00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2467
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3732
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.118 The recommended git tool is: git
00:00:00.118 using credential 00000000-0000-0000-0000-000000000002
00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.170 Fetching changes from the remote Git repository
00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.218 Using shallow fetch with depth 1
00:00:00.218 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.218 > git --version # timeout=10
00:00:00.258 > git --version # 'git version 2.39.2'
00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.827 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.838 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.850 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.850 > git config core.sparsecheckout # timeout=10
00:00:06.861 > git read-tree -mu HEAD # timeout=10
00:00:06.876 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.904 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.904 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.984 [Pipeline] Start of Pipeline
00:00:06.999 [Pipeline] library
00:00:07.000 Loading library shm_lib@master
00:00:07.001 Library shm_lib@master is cached. Copying from home.
00:00:07.015 [Pipeline] node
00:00:07.044 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.046 [Pipeline] {
00:00:07.053 [Pipeline] catchError
00:00:07.054 [Pipeline] {
00:00:07.063 [Pipeline] wrap
00:00:07.070 [Pipeline] {
00:00:07.076 [Pipeline] stage
00:00:07.077 [Pipeline] { (Prologue)
00:00:07.290 [Pipeline] sh
00:00:08.162 + logger -p user.info -t JENKINS-CI
00:00:08.194 [Pipeline] echo
00:00:08.196 Node: WFP4
00:00:08.204 [Pipeline] sh
00:00:08.547 [Pipeline] setCustomBuildProperty
00:00:08.557 [Pipeline] echo
00:00:08.559 Cleanup processes
00:00:08.564 [Pipeline] sh
00:00:08.857 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.857 6061 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.873 [Pipeline] sh
00:00:09.169 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.169 ++ grep -v 'sudo pgrep'
00:00:09.169 ++ awk '{print $1}'
00:00:09.169 + sudo kill -9
00:00:09.169 + true
00:00:09.185 [Pipeline] cleanWs
00:00:09.196 [WS-CLEANUP] Deleting project workspace...
00:00:09.196 [WS-CLEANUP] Deferred wipeout is used...
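The cleanup step traced above collects stray SPDK processes with pgrep, filters out the pgrep invocation itself, and force-kills whatever PIDs remain; here the pipeline matched nothing, so the kill was allowed to fail. A minimal sketch of that idiom, assuming only what the trace shows (the workspace path is from the log; the variable names are illustrative):

    #!/usr/bin/env bash
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List processes whose command line mentions the SPDK checkout,
    # drop the pgrep line itself, and keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no surviving PIDs, "kill -9" exits non-zero; tolerating that
    # matches the "+ sudo kill -9" followed by "+ true" pair in the trace.
    sudo kill -9 $pids || true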
00:00:09.209 [WS-CLEANUP] done
00:00:09.214 [Pipeline] setCustomBuildProperty
00:00:09.230 [Pipeline] sh
00:00:09.517 + sudo git config --global --replace-all safe.directory '*'
00:00:09.646 [Pipeline] httpRequest
00:00:11.425 [Pipeline] echo
00:00:11.426 Sorcerer 10.211.164.20 is alive
00:00:11.435 [Pipeline] retry
00:00:11.437 [Pipeline] {
00:00:11.450 [Pipeline] httpRequest
00:00:11.455 HttpMethod: GET
00:00:11.455 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.456 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.481 Response Code: HTTP/1.1 200 OK
00:00:11.481 Success: Status code 200 is in the accepted range: 200,404
00:00:11.481 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.845 [Pipeline] }
00:00:29.863 [Pipeline] // retry
00:00:29.870 [Pipeline] sh
00:00:30.160 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.198 [Pipeline] httpRequest
00:00:30.636 [Pipeline] echo
00:00:30.638 Sorcerer 10.211.164.20 is alive
00:00:30.648 [Pipeline] retry
00:00:30.650 [Pipeline] {
00:00:30.665 [Pipeline] httpRequest
00:00:30.671 HttpMethod: GET
00:00:30.671 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:30.672 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:30.701 Response Code: HTTP/1.1 200 OK
00:00:30.702 Success: Status code 200 is in the accepted range: 200,404
00:00:30.702 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:02:05.950 [Pipeline] }
00:02:05.969 [Pipeline] // retry
00:02:05.977 [Pipeline] sh
00:02:06.270 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:02:08.826 [Pipeline] sh
00:02:09.115 + git -C spdk log --oneline -n5
00:02:09.115 e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:09.115 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:02:09.115 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:02:09.115 66289a6db build: use VERSION file for storing version
00:02:09.115 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:02:09.134 [Pipeline] withCredentials
00:02:09.145 > git --version # timeout=10
00:02:09.157 > git --version # 'git version 2.39.2'
00:02:09.180 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:09.182 [Pipeline] {
00:02:09.192 [Pipeline] retry
00:02:09.194 [Pipeline] {
00:02:09.209 [Pipeline] sh
00:02:09.725 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:02:09.998 [Pipeline] }
00:02:10.018 [Pipeline] // retry
00:02:10.023 [Pipeline] }
00:02:10.042 [Pipeline] // withCredentials
00:02:10.053 [Pipeline] httpRequest
00:02:10.369 [Pipeline] echo
00:02:10.371 Sorcerer 10.211.164.20 is alive
00:02:10.381 [Pipeline] retry
00:02:10.383 [Pipeline] {
00:02:10.401 [Pipeline] httpRequest
00:02:10.406 HttpMethod: GET
00:02:10.406 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:10.407 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:10.410 Response Code: HTTP/1.1 200 OK
00:02:10.411 Success: Status code 200 is in the accepted range: 200,404
00:02:10.411 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:11.614 [Pipeline] }
00:02:11.631 [Pipeline] // retry
00:02:11.640 [Pipeline] sh
00:02:11.925 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:13.320 [Pipeline] sh
00:02:13.609 + git -C dpdk log --oneline -n5
00:02:13.609 caf0f5d395 version: 22.11.4
00:02:13.609 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:13.609 dc9c799c7d vhost: fix missing spinlock unlock
00:02:13.609 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:13.609 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:13.620 [Pipeline] }
00:02:13.635 [Pipeline] // stage
00:02:13.645 [Pipeline] stage
00:02:13.647 [Pipeline] { (Prepare)
00:02:13.668 [Pipeline] writeFile
00:02:13.683 [Pipeline] sh
00:02:13.969 + logger -p user.info -t JENKINS-CI
00:02:13.984 [Pipeline] sh
00:02:14.274 + logger -p user.info -t JENKINS-CI
00:02:14.287 [Pipeline] sh
00:02:14.572 + cat autorun-spdk.conf
00:02:14.572 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.572 SPDK_TEST_NVMF=1
00:02:14.572 SPDK_TEST_NVME_CLI=1
00:02:14.572 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:14.572 SPDK_TEST_NVMF_NICS=e810
00:02:14.572 SPDK_TEST_VFIOUSER=1
00:02:14.572 SPDK_RUN_UBSAN=1
00:02:14.572 NET_TYPE=phy
00:02:14.572 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:14.572 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:14.579 RUN_NIGHTLY=1
00:02:14.585 [Pipeline] readFile
00:02:14.618 [Pipeline] withEnv
00:02:14.619 [Pipeline] {
00:02:14.631 [Pipeline] sh
00:02:14.916 + set -ex
00:02:14.916 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:14.916 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:14.916 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.916 ++ SPDK_TEST_NVMF=1
00:02:14.916 ++ SPDK_TEST_NVME_CLI=1
00:02:14.916 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:14.916 ++ SPDK_TEST_NVMF_NICS=e810
00:02:14.916 ++ SPDK_TEST_VFIOUSER=1
00:02:14.916 ++ SPDK_RUN_UBSAN=1
00:02:14.916 ++ NET_TYPE=phy
00:02:14.916 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:14.916 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:14.916 ++ RUN_NIGHTLY=1
00:02:14.916 + case $SPDK_TEST_NVMF_NICS in
00:02:14.916 + DRIVERS=ice
00:02:14.916 + [[ tcp == \r\d\m\a ]]
00:02:14.916 + [[ -n ice ]]
00:02:14.916 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:14.916 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:14.916 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:14.916 rmmod: ERROR: Module i40iw is not currently loaded
00:02:14.916 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:14.916 + true
00:02:14.916 + for D in $DRIVERS
00:02:14.916 + sudo modprobe ice
00:02:14.916 + exit 0
00:02:14.925 [Pipeline] }
00:02:14.940 [Pipeline] // withEnv
00:02:14.945 [Pipeline] }
00:02:14.958 [Pipeline] // stage
00:02:14.968 [Pipeline] catchError
00:02:14.969 [Pipeline] {
00:02:14.982 [Pipeline] timeout
00:02:14.982 Timeout set to expire in 1 hr 0 min
00:02:14.984 [Pipeline] {
00:02:14.997 [Pipeline] stage
00:02:14.999 [Pipeline] { (Tests)
00:02:15.012 [Pipeline] sh
00:02:15.301 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:15.301 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:15.301 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:15.301 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:15.301 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:15.301 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:15.301 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:15.301 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:15.301 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:15.301 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:15.301 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:15.301 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:15.301 + source /etc/os-release
00:02:15.301 ++ NAME='Fedora Linux'
00:02:15.301 ++ VERSION='39 (Cloud Edition)'
00:02:15.301 ++ ID=fedora
00:02:15.301 ++ VERSION_ID=39
00:02:15.301 ++ VERSION_CODENAME=
00:02:15.301 ++ PLATFORM_ID=platform:f39
00:02:15.301 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:15.301 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:15.301 ++ LOGO=fedora-logo-icon
00:02:15.301 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:15.301 ++ HOME_URL=https://fedoraproject.org/
00:02:15.301 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:15.301 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:15.301 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:15.301 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:15.301 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:15.301 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:15.301 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:15.301 ++ SUPPORT_END=2024-11-12
00:02:15.301 ++ VARIANT='Cloud Edition'
00:02:15.301 ++ VARIANT_ID=cloud
00:02:15.301 + uname -a
00:02:15.301 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:02:15.301 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:17.841 Hugepages
00:02:17.841 node hugesize free / total
00:02:17.841 node0 1048576kB 0 / 0
00:02:17.841 node0 2048kB 0 / 0
00:02:17.841 node1 1048576kB 0 / 0
00:02:17.841 node1 2048kB 0 / 0
00:02:17.841
00:02:17.841 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.841 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:17.841 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:17.841 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:17.841 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:17.841 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:17.841 + rm -f /tmp/spdk-ld-path
00:02:17.841 + source autorun-spdk.conf
00:02:17.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.841 ++ SPDK_TEST_NVMF=1
00:02:17.841 ++ SPDK_TEST_NVME_CLI=1
00:02:17.841 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:17.841 ++ SPDK_TEST_NVMF_NICS=e810
00:02:17.841 ++ SPDK_TEST_VFIOUSER=1
00:02:17.841 ++ SPDK_RUN_UBSAN=1
00:02:17.841 ++ NET_TYPE=phy
00:02:17.841 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:17.841 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:17.841 ++ RUN_NIGHTLY=1
00:02:17.841 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.841 + [[ -n '' ]]
00:02:17.841 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:17.841 + for M in /var/spdk/build-*-manifest.txt
00:02:17.841 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.841 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:17.841 + for M in /var/spdk/build-*-manifest.txt
00:02:17.841 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.841 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:17.841 + for M in /var/spdk/build-*-manifest.txt
00:02:17.841 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:17.841 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:17.841 ++ uname
00:02:17.841 + [[ Linux == \L\i\n\u\x ]]
00:02:17.841 + sudo dmesg -T
00:02:17.841 + sudo dmesg --clear
00:02:17.841 + dmesg_pid=7548
00:02:17.841 + [[ Fedora Linux == FreeBSD ]]
00:02:17.841 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.841 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.841 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:17.841 + sudo dmesg -Tw
00:02:17.841 + [[ -x /usr/src/fio-static/fio ]]
00:02:17.841 + export FIO_BIN=/usr/src/fio-static/fio
00:02:17.841 + FIO_BIN=/usr/src/fio-static/fio
00:02:17.841 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:17.841 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:17.841 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:17.841 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.841 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.841 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:17.841 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.841 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.841 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.100 22:08:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:18.100 22:08:07 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.100 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:18.100 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:18.101 22:08:07 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
00:02:18.101 22:08:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:18.101 22:08:07 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:18.101 22:08:07 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:18.101 22:08:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:18.101 22:08:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:18.101 22:08:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:18.101 22:08:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:18.101 22:08:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:18.101 22:08:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.101 22:08:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.101 22:08:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.101 22:08:07 -- paths/export.sh@5 -- $ export PATH
00:02:18.101 22:08:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:18.101 22:08:07 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:18.101 22:08:07 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:18.101 22:08:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734383287.XXXXXX
00:02:18.101 22:08:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734383287.rHCT2W
00:02:18.101 22:08:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:18.101 22:08:07 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:18.101 22:08:07 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:18.101 22:08:07 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:02:18.101 22:08:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:18.101 22:08:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:18.101 22:08:07 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:18.101 22:08:07 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:18.101 22:08:07 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.101 22:08:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:02:18.101 22:08:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:18.101 22:08:07 -- pm/common@17 -- $ local monitor
00:02:18.101 22:08:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.101 22:08:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.101 22:08:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.101 22:08:07 -- pm/common@21 -- $ date +%s
00:02:18.101 22:08:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:18.101 22:08:07 -- pm/common@21 -- $ date +%s
00:02:18.101 22:08:07 -- pm/common@25 -- $ sleep 1
00:02:18.101 22:08:07 -- pm/common@21 -- $ date +%s
00:02:18.101 22:08:07 -- pm/common@21 -- $ date +%s
00:02:18.101 22:08:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734383287
00:02:18.101 22:08:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734383287
00:02:18.101 22:08:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734383287
00:02:18.101 22:08:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734383287
00:02:18.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734383287_collect-vmstat.pm.log
00:02:18.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734383287_collect-cpu-load.pm.log
00:02:18.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734383287_collect-cpu-temp.pm.log
00:02:18.101 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734383287_collect-bmc-pm.bmc.pm.log
00:02:19.041 22:08:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:19.041 22:08:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:19.041 22:08:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:19.041 22:08:08 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.041 22:08:08 -- spdk/autobuild.sh@16 -- $ date -u
00:02:19.041 Mon Dec 16 09:08:08 PM UTC 2024
00:02:19.041 22:08:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:19.041 v25.01-rc1-2-ge01cb43b8
00:02:19.041 22:08:08 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:19.041 22:08:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:19.041 22:08:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:19.041 22:08:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.041 22:08:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.041 22:08:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.041 ************************************
00:02:19.041 START TEST ubsan
00:02:19.041 ************************************
00:02:19.041 22:08:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:19.041 using ubsan
00:02:19.041 00:02:19.041 real 0m0.000s
00:02:19.041 user 0m0.000s
00:02:19.041 sys 0m0.000s
00:02:19.041 22:08:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.041 22:08:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.041 ************************************
00:02:19.041 END TEST ubsan
00:02:19.041 ************************************
00:02:19.302 22:08:08 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:19.302 22:08:08 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:19.302 22:08:08 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:19.302 22:08:08 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:19.302 22:08:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.302 22:08:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.302 ************************************
00:02:19.302 START TEST build_native_dpdk
00:02:19.302 ************************************
00:02:19.302 22:08:08 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:02:19.302 caf0f5d395 version: 22.11.4
00:02:19.302 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:19.302 dc9c799c7d vhost: fix missing spinlock unlock
00:02:19.302 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:19.302 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:19.302 22:08:08 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:19.303 22:08:08 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:19.303 patching file config/rte_config.h
00:02:19.303 Hunk #1 succeeded at 60 (offset 1 line).
00:02:19.303 22:08:08 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:19.303 22:08:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
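The dense xtrace above is autobuild stepping through its shell version comparator: 'lt 22.11.4 21.11.0' fails (22 > 21), 'lt 22.11.4 24.07.0' succeeds, and 'ge 22.11.4 24.07.0' fails, which selects the DPDK 22.11-era patches and the plain meson/ninja flow. A condensed sketch of the algorithm the trace walks (not a verbatim copy of scripts/common.sh; it splits on ".-:", pads missing components with zeros, and compares numerically):

    # cmp_versions VER1 OP VER2, e.g. cmp_versions 22.11.4 '<' 24.07.0
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 lt=0 gt=0 eq=0 v
        IFS=.-: read -ra ver1 <<< "$1"   # 22.11.4 -> (22 11 4)
        IFS=.-: read -ra ver2 <<< "$3"
        case "$op" in
            '<') lt=1 ;; '>') gt=1 ;; '==') eq=1 ;;
            '<=') lt=1 eq=1 ;; '>=') gt=1 eq=1 ;;
        esac
        # Walk the longer component list, treating absent fields as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return $((!gt))
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return $((!lt))
        done
        return $((!eq))   # every component equal
    }

With this sketch, 'cmp_versions 22.11.4 "<" 21.11.0' returns 1 at the first component (22 > 21), matching the "return 1" seen in the trace before the rte_config.h patch is applied.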
00:02:25.890 Checking for size of "void *" : 8
00:02:25.890 Checking for size of "void *" : 8 (cached)
00:02:25.890 Library m found: YES
00:02:25.890 Library numa found: YES
00:02:25.890 Has header "numaif.h" : YES
00:02:25.890 Library fdt found: NO
00:02:25.890 Library execinfo found: NO
00:02:25.890 Has header "execinfo.h" : YES
00:02:25.890 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:25.890 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:25.890 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:25.890 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:25.890 Run-time dependency openssl found: YES 3.1.1
00:02:25.890 Run-time dependency libpcap found: YES 1.10.4
00:02:25.890 Has header "pcap.h" with dependency libpcap: YES
00:02:25.890 Compiler for C supports arguments -Wcast-qual: YES
00:02:25.890 Compiler for C supports arguments -Wdeprecated: YES
00:02:25.890 Compiler for C supports arguments -Wformat: YES
00:02:25.890 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:25.890 Compiler for C supports arguments -Wformat-security: NO
00:02:25.890 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:25.890 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:25.890 Compiler for C supports arguments -Wnested-externs: YES
00:02:25.890 Compiler for C supports arguments -Wold-style-definition: YES
00:02:25.890 Compiler for C supports arguments -Wpointer-arith: YES
00:02:25.890 Compiler for C supports arguments -Wsign-compare: YES
00:02:25.890 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:25.890 Compiler for C supports arguments -Wundef: YES
00:02:25.890 Compiler for C supports arguments -Wwrite-strings: YES
00:02:25.890 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:25.890 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:25.890 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:25.890 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:25.890 Compiler for C supports arguments -mavx512f: YES
00:02:25.890 Checking if "AVX512 checking" compiles: YES
00:02:25.890 Fetching value of define "__SSE4_2__" : 1
00:02:25.890 Fetching value of define "__AES__" : 1
00:02:25.890 Fetching value of define "__AVX__" : 1
00:02:25.890 Fetching value of define "__AVX2__" : 1
00:02:25.890 Fetching value of define "__AVX512BW__" : 1
00:02:25.890 Fetching value of define "__AVX512CD__" : 1
00:02:25.890 Fetching value of define "__AVX512DQ__" : 1
00:02:25.890 Fetching value of define "__AVX512F__" : 1
00:02:25.890 Fetching value of define "__AVX512VL__" : 1
00:02:25.890 Fetching value of define "__PCLMUL__" : 1
00:02:25.890 Fetching value of define "__RDRND__" : 1
00:02:25.890 Fetching value of define "__RDSEED__" : 1
00:02:25.890 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:25.890 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:25.890 Message: lib/kvargs: Defining dependency "kvargs"
00:02:25.890 Message: lib/telemetry: Defining dependency "telemetry"
00:02:25.890 Checking for function "getentropy" : YES
00:02:25.890 Message: lib/eal: Defining dependency "eal"
00:02:25.890 Message: lib/ring: Defining dependency "ring"
00:02:25.890 Message: lib/rcu: Defining dependency "rcu"
00:02:25.890 Message: lib/mempool: Defining dependency "mempool"
00:02:25.890 Message: lib/mbuf: Defining dependency "mbuf"
00:02:25.890 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:25.890 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:25.890 Compiler for C supports arguments -mpclmul: YES
00:02:25.890 Compiler for C supports arguments -maes: YES
00:02:25.890 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:25.890 Compiler for C supports arguments -mavx512bw: YES
00:02:25.890 Compiler for C supports arguments -mavx512dq: YES
00:02:25.890 Compiler for C supports arguments -mavx512vl: YES
00:02:25.890 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:25.890 Compiler for C supports arguments -mavx2: YES
00:02:25.890 Compiler for C supports arguments -mavx: YES
00:02:25.890 Message: lib/net: Defining dependency "net"
00:02:25.890 Message: lib/meter: Defining dependency "meter"
00:02:25.890 Message: lib/ethdev: Defining dependency "ethdev"
00:02:25.890 Message: lib/pci: Defining dependency "pci"
00:02:25.890 Message: lib/cmdline: Defining dependency "cmdline"
00:02:25.890 Message: lib/metrics: Defining dependency "metrics"
00:02:25.890 Message: lib/hash: Defining dependency "hash"
00:02:25.890 Message: lib/timer: Defining dependency "timer"
00:02:25.890 Fetching value of define "__AVX2__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:25.890 Message: lib/acl: Defining dependency "acl"
00:02:25.890 Message: lib/bbdev: Defining dependency "bbdev"
00:02:25.890 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:25.890 Run-time dependency libelf found: YES 0.191
00:02:25.890 Message: lib/bpf: Defining dependency "bpf"
00:02:25.890 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:25.890 Message: lib/compressdev: Defining dependency "compressdev"
00:02:25.890 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:25.890 Message: lib/distributor: Defining dependency "distributor"
00:02:25.890 Message: lib/efd: Defining dependency "efd"
00:02:25.890 Message: lib/eventdev: Defining dependency "eventdev"
00:02:25.890 Message: lib/gpudev: Defining dependency "gpudev"
00:02:25.890 Message: lib/gro: Defining dependency "gro"
00:02:25.890 Message: lib/gso: Defining dependency "gso"
00:02:25.890 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:25.890 Message: lib/jobstats: Defining dependency "jobstats"
00:02:25.890 Message: lib/latencystats: Defining dependency "latencystats"
00:02:25.890 Message: lib/lpm: Defining dependency "lpm"
00:02:25.890 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:25.890 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:25.890 Message: lib/member: Defining dependency "member"
00:02:25.890 Message: lib/pcapng: Defining dependency "pcapng"
00:02:25.890 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:25.890 Message: lib/power: Defining dependency "power"
00:02:25.890 Message: lib/rawdev: Defining dependency "rawdev"
00:02:25.890 Message: lib/regexdev: Defining dependency "regexdev"
00:02:25.890 Message: lib/dmadev: Defining dependency "dmadev"
00:02:25.890 Message: lib/rib: Defining dependency "rib"
00:02:25.890 Message: lib/reorder: Defining dependency "reorder"
00:02:25.890 Message: lib/sched: Defining dependency "sched"
00:02:25.890 Message: lib/security: Defining dependency "security"
00:02:25.890 Message: lib/stack: Defining dependency "stack"
00:02:25.890 Has header "linux/userfaultfd.h" : YES
00:02:25.890 Message: lib/vhost: Defining dependency "vhost"
00:02:25.890 Message: lib/ipsec: Defining dependency "ipsec"
00:02:25.890 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:25.890 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:25.890 Message: lib/fib: Defining dependency "fib"
00:02:25.890 Message: lib/port: Defining dependency "port"
00:02:25.890 Message: lib/pdump: Defining dependency "pdump"
00:02:25.890 Message: lib/table: Defining dependency "table"
00:02:25.890 Message: lib/pipeline: Defining dependency "pipeline"
00:02:25.890 Message: lib/graph: Defining dependency "graph"
00:02:25.890 Message: lib/node: Defining dependency "node"
00:02:25.890 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:25.890 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:25.890 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:25.890 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:25.890 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:25.890 Compiler for C supports arguments -Wno-unused-value: YES
00:02:25.890 Compiler for C supports arguments -Wno-format: YES
00:02:25.891 Compiler for C supports arguments -Wno-format-security: YES
00:02:25.891 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:26.461 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:26.461 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:26.461 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:26.461 Fetching value of define "__AVX2__" : 1 (cached)
00:02:26.461 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.461 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.461 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:26.461 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:26.461 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:26.461 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:26.461 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:26.461 Configuring doxy-api.conf using configuration
00:02:26.461 Program sphinx-build found: NO
00:02:26.461 Configuring rte_build_config.h using configuration
00:02:26.461 Message:
00:02:26.461 =================
00:02:26.461 Applications Enabled
00:02:26.461 =================
00:02:26.461
00:02:26.461 apps:
00:02:26.461 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:26.461 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:26.461 test-security-perf,
00:02:26.461
00:02:26.461 Message:
00:02:26.461 =================
00:02:26.461 Libraries Enabled
00:02:26.461 =================
00:02:26.461
00:02:26.461 libs:
00:02:26.462 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:26.462 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:26.462 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:26.462 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:26.462 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:26.462 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:26.462 table, pipeline, graph, node,
00:02:26.462
00:02:26.462 Message:
00:02:26.462 ===============
00:02:26.462 Drivers Enabled
00:02:26.462 ===============
00:02:26.462
00:02:26.462 common:
00:02:26.462
00:02:26.462 bus:
00:02:26.462 pci, vdev,
00:02:26.462 mempool:
00:02:26.462 ring,
00:02:26.462 dma:
00:02:26.462
00:02:26.462 net:
00:02:26.462 i40e,
00:02:26.462 raw:
00:02:26.462
00:02:26.462 crypto:
00:02:26.462
00:02:26.462 compress:
00:02:26.462
00:02:26.462 regex:
00:02:26.462
00:02:26.462 vdpa:
00:02:26.462
00:02:26.462 event:
00:02:26.462
00:02:26.462 baseband:
00:02:26.462
00:02:26.462 gpu:
00:02:26.462
00:02:26.462
00:02:26.462 Message:
00:02:26.462 =================
00:02:26.462 Content Skipped
00:02:26.462 =================
00:02:26.462
00:02:26.462 apps:
00:02:26.462
00:02:26.462 libs:
00:02:26.462 kni: explicitly disabled via build config (deprecated lib)
00:02:26.462 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:26.462
00:02:26.462 drivers:
00:02:26.462 common/cpt: not in enabled drivers build config
00:02:26.462 common/dpaax: not in enabled drivers build config
00:02:26.462 common/iavf: not in enabled drivers build config
00:02:26.462 common/idpf: not in enabled drivers build config
00:02:26.462 common/mvep: not in enabled drivers build config
00:02:26.462 common/octeontx: not in enabled drivers build config
00:02:26.462 bus/auxiliary: not in enabled drivers build config
00:02:26.462 bus/dpaa: not in enabled drivers build config
00:02:26.462 bus/fslmc: not in enabled drivers build config
00:02:26.462 bus/ifpga: not in enabled drivers build config
00:02:26.462 bus/vmbus: not in enabled drivers build config
00:02:26.462 common/cnxk: not in enabled drivers build config
00:02:26.462 common/mlx5: not in enabled drivers build config
00:02:26.462 common/qat: not in enabled drivers build config
00:02:26.462 common/sfc_efx: not in enabled drivers build config
00:02:26.462 mempool/bucket: not in enabled drivers build config
00:02:26.462 mempool/cnxk: not in enabled drivers build config
00:02:26.462 mempool/dpaa: not in enabled drivers build config
00:02:26.462 mempool/dpaa2: not in enabled drivers build config
00:02:26.462 mempool/octeontx: not in enabled drivers build config
00:02:26.462 mempool/stack: not in enabled drivers build config
00:02:26.462 dma/cnxk: not in enabled drivers build config
00:02:26.462 dma/dpaa: not in enabled drivers build config
00:02:26.462 dma/dpaa2: not in enabled drivers build config
00:02:26.462 dma/hisilicon: not in enabled drivers build config
00:02:26.462 dma/idxd: not in enabled drivers build config
00:02:26.462 dma/ioat: not in enabled drivers build config
00:02:26.462 dma/skeleton: not in enabled drivers build config
00:02:26.462 net/af_packet: not in enabled drivers build config
00:02:26.462 net/af_xdp: not in enabled drivers build config
00:02:26.462 net/ark: not in enabled drivers build config
00:02:26.462 net/atlantic: not in enabled drivers build config
00:02:26.462 net/avp: not in enabled drivers build config
00:02:26.462 net/axgbe: not in enabled drivers build config
00:02:26.462 net/bnx2x: not in enabled drivers build config
00:02:26.462 net/bnxt: not in enabled drivers build config
00:02:26.462 net/bonding: not in enabled drivers build config
00:02:26.462 net/cnxk: not in enabled drivers build config
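The "Drivers Enabled" summary above mirrors the -Denable_drivers= value that the earlier 'printf %s, bus bus/pci ...' step built from the DPDK_DRIVERS array; every driver absent from that list lands under "Content Skipped". A sketch of that join, with the array contents copied from the logged command (the trailing comma is visible in the logged meson invocation and is accepted there):

    DPDK_DRIVERS=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
                  power/acpi power/amd_pstate power/cppc power/intel_pstate
                  power/intel_uncore power/kvm_vm)
    # printf reuses its format string once per argument, producing a
    # comma-joined list with a trailing comma.
    enable_drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
    echo "$enable_drivers"
    # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,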
00:02:26.462 net/cxgbe: not in enabled drivers build config
00:02:26.462 net/dpaa: not in enabled drivers build config
00:02:26.462 net/dpaa2: not in enabled drivers build config
00:02:26.462 net/e1000: not in enabled drivers build config
00:02:26.462 net/ena: not in enabled drivers build config
00:02:26.462 net/enetc: not in enabled drivers build config
00:02:26.462 net/enetfec: not in enabled drivers build config
00:02:26.462 net/enic: not in enabled drivers build config
00:02:26.462 net/failsafe: not in enabled drivers build config
00:02:26.462 net/fm10k: not in enabled drivers build config
00:02:26.462 net/gve: not in enabled drivers build config
00:02:26.462 net/hinic: not in enabled drivers build config
00:02:26.462 net/hns3: not in enabled drivers build config
00:02:26.462 net/iavf: not in enabled drivers build config
00:02:26.462 net/ice: not in enabled drivers build config
00:02:26.462 net/idpf: not in enabled drivers build config
00:02:26.462 net/igc: not in enabled drivers build config
00:02:26.462 net/ionic: not in enabled drivers build config
00:02:26.462 net/ipn3ke: not in enabled drivers build config
00:02:26.462 net/ixgbe: not in enabled drivers build config
00:02:26.462 net/kni: not in enabled drivers build config
00:02:26.462 net/liquidio: not in enabled drivers build config
00:02:26.462 net/mana: not in enabled drivers build config
00:02:26.462 net/memif: not in enabled drivers build config
00:02:26.462 net/mlx4: not in enabled drivers build config
00:02:26.462 net/mlx5: not in enabled drivers build config
00:02:26.462 net/mvneta: not in enabled drivers build config
00:02:26.462 net/mvpp2: not in enabled drivers build config
00:02:26.462 net/netvsc: not in enabled drivers build config
00:02:26.462 net/nfb: not in enabled drivers build config
00:02:26.462 net/nfp: not in enabled drivers build config
00:02:26.462 net/ngbe: not in enabled drivers build config
00:02:26.462 net/null: not in enabled drivers build config
00:02:26.462 net/octeontx: not in enabled drivers build config
00:02:26.462 net/octeon_ep: not in enabled drivers build config
00:02:26.462 net/pcap: not in enabled drivers build config
00:02:26.462 net/pfe: not in enabled drivers build config
00:02:26.462 net/qede: not in enabled drivers build config
00:02:26.462 net/ring: not in enabled drivers build config
00:02:26.462 net/sfc: not in enabled drivers build config
00:02:26.462 net/softnic: not in enabled drivers build config
00:02:26.462 net/tap: not in enabled drivers build config
00:02:26.462 net/thunderx: not in enabled drivers build config
00:02:26.462 net/txgbe: not in enabled drivers build config
00:02:26.462 net/vdev_netvsc: not in enabled drivers build config
00:02:26.462 net/vhost: not in enabled drivers build config
00:02:26.462 net/virtio: not in enabled drivers build config
00:02:26.463 net/vmxnet3: not in enabled drivers build config
00:02:26.463 raw/cnxk_bphy: not in enabled drivers build config
00:02:26.463 raw/cnxk_gpio: not in enabled drivers build config
00:02:26.463 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:26.463 raw/ifpga: not in enabled drivers build config
00:02:26.463 raw/ntb: not in enabled drivers build config
00:02:26.463 raw/skeleton: not in enabled drivers build config
00:02:26.463 crypto/armv8: not in enabled drivers build config
00:02:26.463 crypto/bcmfs: not in enabled drivers build config
00:02:26.463 crypto/caam_jr: not in enabled drivers build config
00:02:26.463 crypto/ccp: not in enabled drivers build config
00:02:26.463 crypto/cnxk: not in enabled drivers build config
00:02:26.463 crypto/dpaa_sec: not in enabled drivers build config
00:02:26.463 crypto/dpaa2_sec: not in enabled drivers build config
00:02:26.463 crypto/ipsec_mb: not in enabled drivers build config
00:02:26.463 crypto/mlx5: not in enabled drivers build config
00:02:26.463 crypto/mvsam: not in enabled drivers build config
00:02:26.463 crypto/nitrox: not in enabled drivers build config
00:02:26.463 crypto/null: not in enabled drivers build config
00:02:26.463 crypto/octeontx: not in enabled drivers build config
00:02:26.463 crypto/openssl: not in enabled drivers build config
00:02:26.463 crypto/scheduler: not in enabled drivers build config
00:02:26.463 crypto/uadk: not in enabled drivers build config
00:02:26.463 crypto/virtio: not in enabled drivers build config
00:02:26.463 compress/isal: not in enabled drivers build config
00:02:26.463 compress/mlx5: not in enabled drivers build config
00:02:26.463 compress/octeontx: not in enabled drivers build config
00:02:26.463 compress/zlib: not in enabled drivers build config
00:02:26.463 regex/mlx5: not in enabled drivers build config
00:02:26.463 regex/cn9k: not in enabled drivers build config
00:02:26.463 vdpa/ifc: not in enabled drivers build config
00:02:26.463 vdpa/mlx5: not in enabled drivers build config
00:02:26.463 vdpa/sfc: not in enabled drivers build config
00:02:26.463 event/cnxk: not in enabled drivers build config
00:02:26.463 event/dlb2: not in enabled drivers build config
00:02:26.463 event/dpaa: not in enabled drivers build config
00:02:26.463 event/dpaa2: not in enabled drivers build config
00:02:26.463 event/dsw: not in enabled drivers build config
00:02:26.463 event/opdl: not in enabled drivers build config
00:02:26.463 event/skeleton: not in enabled drivers build config
00:02:26.463 event/sw: not in enabled drivers build config
00:02:26.463 event/octeontx: not in enabled drivers build config
00:02:26.463 baseband/acc: not in enabled drivers build config
00:02:26.463 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:26.463 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:26.463 baseband/la12xx: not in enabled drivers build config
00:02:26.463 baseband/null: not in enabled drivers build config
00:02:26.463 baseband/turbo_sw: not in enabled drivers build config
00:02:26.463 gpu/cuda: not in enabled drivers build config
00:02:26.463
00:02:26.463
00:02:26.463 Build targets in project: 311
00:02:26.463
00:02:26.463 DPDK 22.11.4
00:02:26.463
00:02:26.463 User defined options
00:02:26.463 libdir : lib
00:02:26.463 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:26.463 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:26.463 c_link_args :
00:02:26.463 enable_docs : false
00:02:26.463 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:26.463 enable_kmods : false
00:02:26.463 machine : native
00:02:26.463 tests : false
00:02:26.463
00:02:26.463 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:26.463 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
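[Editor's note: the configuration summary above can be reproduced outside this CI job. The sketch below is an assumption reconstructed from the "User defined options" block, not a command captured from this log: the option names and values are copied verbatim from that block, the build directory name (build-tmp) comes from the ninja invocation that follows, and the exact command line used by autobuild_common.sh is unknown. The deprecation WARNING above suggests the job invoked `meson [options]` without the explicit `setup` subcommand; the explicit form is shown here.

    # Sketch only: every --/-D value below is copied from the summary above.
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Build with the same parallelism the job uses:
    ninja -C build-tmp -j96
]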
00:02:26.463 22:08:16 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:02:26.725 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:26.725 [1/740] Generating lib/rte_telemetry_mingw with a custom command
00:02:26.725 [2/740] Generating lib/rte_telemetry_def with a custom command
00:02:26.725 [3/740] Generating lib/rte_kvargs_def with a custom command
00:02:26.725 [4/740] Generating lib/rte_kvargs_mingw with a custom command
00:02:26.725 [5/740] Generating lib/rte_ring_mingw with a custom command
00:02:26.725 [6/740] Generating lib/rte_ring_def with a custom command
00:02:26.725 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:26.725 [8/740] Generating lib/rte_rcu_mingw with a custom command
00:02:26.725 [9/740] Generating lib/rte_eal_def with a custom command
00:02:26.725 [10/740] Generating lib/rte_mempool_mingw with a custom command
00:02:26.725 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:26.725 [12/740] Generating lib/rte_mempool_def with a custom command
00:02:26.725 [13/740] Generating lib/rte_rcu_def with a custom command
00:02:26.725 [14/740] Generating lib/rte_mbuf_mingw with a custom command
00:02:26.725 [15/740] Generating lib/rte_mbuf_def with a custom command
00:02:26.725 [16/740] Generating lib/rte_eal_mingw with a custom command
00:02:26.725 [17/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:26.725 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:26.725 [19/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:26.725 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:26.725 [21/740] Generating lib/rte_net_def with a custom command
00:02:26.725 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:26.725 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:26.725 [24/740] Generating lib/rte_net_mingw with a custom command
00:02:26.725 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:26.990 [26/740] Generating lib/rte_meter_mingw with a custom command
00:02:26.990 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:26.990 [28/740] Generating lib/rte_meter_def with a custom command
00:02:26.990 [29/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:26.990 [30/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:26.990 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:26.990 [32/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:26.990 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:26.990 [34/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:26.990 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:26.990 [36/740] Linking static target lib/librte_kvargs.a
00:02:26.990 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:26.990 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:26.990 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:26.990 [40/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.990 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:26.990 [42/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:26.990 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:26.990 [44/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:26.990 [45/740] Generating lib/rte_ethdev_def with a custom command
00:02:26.990 [46/740] Generating lib/rte_pci_def with a custom command
00:02:26.990 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:26.990 [48/740] Generating lib/rte_ethdev_mingw with a custom command
00:02:26.990 [49/740] Generating lib/rte_pci_mingw with a custom command
00:02:26.990 [50/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:26.990 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:26.990 [52/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:26.990 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:26.990 [54/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:26.990 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:26.990 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:26.990 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:26.990 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:26.990 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:26.990 [60/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:26.990 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:26.990 [62/740] Generating lib/rte_cmdline_def with a custom command
00:02:26.990 [63/740] Generating lib/rte_metrics_def with a custom command
00:02:26.990 [64/740] Generating lib/rte_cmdline_mingw with a custom command
00:02:26.990 [65/740] Generating lib/rte_metrics_mingw with a custom command
00:02:26.990 [66/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:26.990 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:26.990 [68/740] Generating lib/rte_hash_def with a custom command
00:02:26.990 [69/740] Generating lib/rte_hash_mingw with a custom command
00:02:26.990 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:26.990 [71/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:26.990 [72/740] Generating lib/rte_timer_def with a custom command
00:02:26.990 [73/740] Generating lib/rte_timer_mingw with a custom command
00:02:26.990 [74/740] Linking static target lib/librte_ring.a
00:02:26.990 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:26.990 [76/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:26.990 [77/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:26.990 [78/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:26.990 [79/740] Linking static target lib/librte_pci.a
00:02:26.990 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:26.990 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:26.990 [82/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:26.990 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:26.990 [84/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:26.990 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:26.990 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:26.990 [87/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:26.990 [88/740] Generating lib/rte_bbdev_def with a custom command
00:02:26.990 [89/740] Generating lib/rte_acl_mingw with a custom command
00:02:26.990 [90/740] Generating lib/rte_bitratestats_def with a custom command
00:02:26.991 [91/740] Generating lib/rte_bitratestats_mingw with a custom command
00:02:26.991 [92/740] Generating lib/rte_acl_def with a custom command
00:02:26.991 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:26.991 [94/740] Generating lib/rte_bbdev_mingw with a custom command
00:02:27.270 [95/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:27.270 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:27.270 [97/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:27.270 [98/740] Linking static target lib/librte_meter.a
00:02:27.270 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:27.270 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:27.270 [101/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:27.270 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:27.270 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:27.270 [104/740] Generating lib/rte_cfgfile_def with a custom command
00:02:27.270 [105/740] Generating lib/rte_bpf_def with a custom command
00:02:27.270 [106/740] Generating lib/rte_bpf_mingw with a custom command
00:02:27.270 [107/740] Generating lib/rte_cfgfile_mingw with a custom command
00:02:27.270 [108/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:27.270 [109/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:27.270 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:27.270 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:27.270 [112/740] Generating lib/rte_compressdev_def with a custom command
00:02:27.270 [113/740] Generating lib/rte_compressdev_mingw with a custom command
00:02:27.270 [114/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:27.270 [115/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:27.270 [116/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:27.270 [117/740] Generating lib/rte_cryptodev_def with a custom command
00:02:27.270 [118/740] Generating lib/rte_cryptodev_mingw with a custom command
00:02:27.270 [119/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:27.270 [120/740] Generating lib/rte_distributor_mingw with a custom command
00:02:27.270 [121/740] Generating lib/rte_distributor_def with a custom command
00:02:27.270 [122/740] Generating lib/rte_efd_mingw with a custom command
00:02:27.270 [123/740] Generating lib/rte_efd_def with a custom command
00:02:27.270 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:27.270 [125/740] Generating lib/rte_eventdev_def with a custom command
00:02:27.270 [126/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.270 [127/740] Generating lib/rte_eventdev_mingw with a custom command
00:02:27.270 [128/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:27.270 [129/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.542 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:27.542 [131/740] Generating lib/rte_gpudev_mingw with a custom command
00:02:27.542 [132/740] Generating lib/rte_gpudev_def with a custom command
00:02:27.542 [133/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:27.542 [134/740] Linking target lib/librte_kvargs.so.23.0
00:02:27.542 [135/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:27.542 [136/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:27.542 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:27.542 [138/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:27.542 [139/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.542 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:27.542 [141/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:27.542 [142/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.542 [143/740] Generating lib/rte_gro_def with a custom command
00:02:27.542 [144/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:27.542 [145/740] Generating lib/rte_gro_mingw with a custom command
00:02:27.542 [146/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:27.542 [147/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:27.542 [148/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:27.542 [149/740] Generating lib/rte_gso_def with a custom command
00:02:27.542 [150/740] Generating lib/rte_gso_mingw with a custom command
00:02:27.542 [151/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:27.542 [152/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:27.542 [153/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:27.542 [154/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:27.542 [155/740] Linking static target lib/librte_cfgfile.a
00:02:27.542 [156/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:27.542 [157/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:27.542 [158/740] Generating lib/rte_ip_frag_def with a custom command
00:02:27.542 [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:27.542 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:27.542 [161/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:27.542 [162/740] Generating lib/rte_jobstats_def with a custom command
00:02:27.542 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:27.542 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:27.807 [165/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:27.807 [166/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:27.807 [167/740] Linking static target lib/librte_cmdline.a
00:02:27.807 [168/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:27.807 [169/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:27.807 [170/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:27.807 [171/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:27.807 [172/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:27.807 [173/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:27.807 [174/740] Generating lib/rte_latencystats_def with a custom command
00:02:27.807 [175/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:27.807 [176/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:27.807 [177/740] Linking static target lib/librte_metrics.a
00:02:27.807 [178/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:27.807 [179/740] Linking static target lib/librte_timer.a
00:02:27.807 [180/740] Generating lib/rte_lpm_def with a custom command
00:02:27.807 [181/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:27.807 [182/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:27.807 [183/740] Generating lib/rte_lpm_mingw with a custom command
00:02:27.807 [184/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:27.807 [185/740] Generating lib/rte_member_def with a custom command
00:02:27.807 [186/740] Linking static target lib/librte_telemetry.a
00:02:27.807 [187/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:27.807 [188/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:27.807 [189/740] Generating lib/rte_member_mingw with a custom command
00:02:27.807 [190/740] Generating lib/rte_pcapng_def with a custom command
00:02:27.807 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:27.807 [192/740] Linking static target lib/librte_net.a
00:02:27.807 [193/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:27.807 [194/740] Generating lib/rte_pcapng_mingw with a custom command
00:02:27.807 [195/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:27.807 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:27.807 [197/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:27.807 [198/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:27.807 [199/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:27.807 [200/740] Linking static target lib/librte_jobstats.a
00:02:27.807 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:27.807 [202/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:27.807 [203/740] Linking static target lib/librte_bitratestats.a
00:02:27.807 [204/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:27.807 [205/740] Generating lib/rte_power_def with a custom command
00:02:27.807 [206/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:27.807 [207/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:27.807 [208/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:27.807 [209/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:27.807 [210/740] Generating lib/rte_power_mingw with a custom command
00:02:27.807 [211/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:27.807 [212/740] Generating lib/rte_rawdev_def with a custom command
00:02:27.807 [213/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:27.807 [214/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:27.807 [215/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:28.076 [216/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:28.076 [217/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:28.076 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:28.076 [219/740] Generating lib/rte_rawdev_mingw with a custom command
00:02:28.076 [220/740] Generating lib/rte_regexdev_mingw with a custom command
00:02:28.076 [221/740] Generating lib/rte_dmadev_def with a custom command
00:02:28.076 [222/740] Generating lib/rte_regexdev_def with a custom command
00:02:28.076 [223/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:28.076 [224/740] Generating lib/rte_dmadev_mingw with a custom command
00:02:28.076 [225/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:28.076 [226/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:28.076 [227/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:28.076 [228/740] Generating lib/rte_rib_mingw with a custom command
00:02:28.076 [229/740] Generating lib/rte_rib_def with a custom command
00:02:28.076 [230/740] Generating lib/rte_reorder_def with a custom command
00:02:28.076 [231/740] Generating lib/rte_reorder_mingw with a custom command
00:02:28.076 [232/740] Generating lib/rte_sched_def with a custom command
00:02:28.076 [233/740] Generating lib/rte_sched_mingw with a custom command
00:02:28.076 [234/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:28.076 [235/740] Generating lib/rte_security_def with a custom command
00:02:28.076 [236/740] Generating lib/rte_security_mingw with a custom command
00:02:28.076 [237/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:28.076 [238/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:28.076 [239/740] Generating lib/rte_stack_mingw with a custom command
00:02:28.076 [240/740] Generating lib/rte_stack_def with a custom command
00:02:28.076 [241/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:28.076 [242/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:28.076 [243/740] Generating lib/rte_vhost_def with a custom command
00:02:28.076 [244/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:28.076 [245/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:28.076 [246/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:28.076 [247/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:28.076 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:28.076 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:28.076 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:28.076 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:28.076 [252/740] Generating lib/rte_vhost_mingw with a custom command
00:02:28.076 [253/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:28.076 [254/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.076 [255/740] Linking static target lib/librte_stack.a
00:02:28.076 [256/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:28.076 [257/740] Linking static target lib/librte_compressdev.a
00:02:28.339 [258/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:28.339 [259/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:28.339 [260/740] Generating lib/rte_ipsec_def with a custom command
00:02:28.339 [261/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:28.339 [262/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:28.339 [263/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.339 [264/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:28.339 [265/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:28.339 [266/740] Linking static target lib/librte_mempool.a
00:02:28.339 [267/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:28.339 [268/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:28.339 [269/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.339 [270/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.339 [271/740] Linking static target lib/librte_rcu.a
00:02:28.339 [272/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:28.339 [273/740] Generating lib/rte_fib_mingw with a custom command
00:02:28.339 [274/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:28.340 [275/740] Generating lib/rte_fib_def with a custom command
00:02:28.340 [276/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:28.340 [277/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:28.340 [278/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.340 [279/740] Linking static target lib/librte_bbdev.a
00:02:28.340 [280/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:28.340 [281/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:28.340 [282/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:28.340 [283/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.340 [284/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.340 [285/740] Generating lib/rte_port_def with a custom command
00:02:28.340 [286/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:28.340 [287/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:28.340 [288/740] Generating lib/rte_port_mingw with a custom command
00:02:28.340 [289/740] Linking static target lib/librte_rawdev.a
00:02:28.340 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:28.340 [291/740] Linking target lib/librte_telemetry.so.23.0
00:02:28.340 [292/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:28.340 [293/740] Generating lib/rte_pdump_def with a custom command
00:02:28.340 [294/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:28.611 [295/740] Generating lib/rte_pdump_mingw with a custom command
00:02:28.611 [296/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:28.611 [297/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:28.611 [298/740] Linking static target lib/librte_distributor.a
00:02:28.611 [299/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.611 [300/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:28.611 [301/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:28.611 [302/740] Linking static target lib/librte_dmadev.a
00:02:28.611 [303/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:28.611 [304/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:28.611 [305/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:28.611 [306/740] Linking static target lib/librte_gpudev.a
00:02:28.611 [307/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:28.611 [308/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:28.611 [309/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:28.611 [310/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:28.611 [311/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:28.611 [312/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:28.611 [313/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:28.611 [314/740] Linking static target lib/librte_gro.a
00:02:28.611 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:28.611 [316/740] Linking static target lib/librte_gso.a
00:02:28.611 [317/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:28.611 [318/740] Linking static target lib/librte_latencystats.a
00:02:28.611 [319/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:28.611 [320/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:28.878 [321/740] Generating lib/rte_table_def with a custom command
00:02:28.878 [322/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.878 [323/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:28.878 [324/740] Generating lib/rte_table_mingw with a custom command
00:02:28.878 [325/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:28.878 [326/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:28.878 [327/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:28.878 [328/740] Linking static target lib/librte_regexdev.a
00:02:28.878 [329/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:28.878 [330/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:28.878 [331/740] Linking static target lib/librte_eal.a
00:02:28.878 [332/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:28.878 [333/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:28.878 [334/740] Linking static target lib/librte_mbuf.a
00:02:28.878 [335/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.878 [336/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:28.878 [337/740] Linking static target lib/librte_power.a
00:02:28.878 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:28.878 [339/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:28.878 [340/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:28.878 [341/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:28.878 [342/740] Linking static target lib/librte_ip_frag.a
00:02:28.878 [343/740] Generating lib/rte_pipeline_def with a custom command
00:02:28.878 [344/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.878 [345/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:29.142 [346/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:29.142 [347/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:29.142 [348/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:29.142 [349/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:29.142 [350/740] Linking static target lib/librte_pcapng.a
00:02:29.142 [351/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:29.142 [352/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:29.142 [353/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.142 [354/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:29.142 [355/740] Linking static target lib/librte_bpf.a
00:02:29.142 [356/740] Linking static target lib/librte_reorder.a
00:02:29.142 [357/740] Generating lib/rte_graph_def with a custom command
00:02:29.142 [358/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:29.142 [359/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.142 [360/740] Generating lib/rte_graph_mingw with a custom command
00:02:29.142 [361/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:02:29.142 [362/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:29.142 [363/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:02:29.142 [364/740] Linking static target lib/librte_security.a
00:02:29.142 [365/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:29.142 [366/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:29.142 [367/740] Generating lib/rte_node_def with a custom command
00:02:29.142 [368/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.142 [369/740] Generating lib/rte_node_mingw with a custom command
00:02:29.142 [370/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:29.142 [371/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.410 [372/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.410 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:29.410 [374/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:29.410 [375/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:29.410 [376/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:29.410 [377/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:29.410 [378/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:29.410 [379/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:29.410 [380/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:29.410 [381/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:29.410 [382/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:29.410 [383/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:29.410 [384/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:29.410 [385/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:29.410 [386/740] Linking static target lib/librte_rib.a
00:02:29.410 [387/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:29.410 [388/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:29.410 [389/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:29.410 [390/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:29.410 [391/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:29.410 [392/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:29.410 [393/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.410 [394/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:29.410 [395/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:29.410 [396/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:29.410 [397/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.410 [398/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:29.410 [399/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.674 [400/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:29.674 [401/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.674 [402/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.674 [403/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:29.674 [404/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:29.674 [405/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:29.674 [406/740] Linking static target lib/librte_lpm.a
00:02:29.674 [407/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:29.674 [408/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:29.674 [409/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:29.674 [410/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.674 [411/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:29.674 [412/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:29.674 [413/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:29.674 [414/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:29.674 [415/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:29.674 [416/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:29.674 [417/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.674 [418/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:29.674 [419/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:29.674 [420/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:29.674 [421/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:29.674 [422/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:29.674 [423/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:29.674 [424/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:29.674 [425/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:29.674 [426/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:29.674 [427/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:29.674 [428/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:29.674 [429/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:29.674 [430/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:29.674 [431/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:29.940 [432/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:29.940 [433/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:29.940 [434/740] Linking static target lib/librte_efd.a
00:02:29.940 [435/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:29.940 [436/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:29.940 [437/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:29.940 [438/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:29.940 [439/740] Linking static target lib/librte_graph.a
00:02:29.940 [440/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.940 [441/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:29.940 [442/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.940 [443/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:29.940 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:29.940 [445/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:29.940 [446/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.940 [447/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:29.940 [448/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:30.213 [449/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:30.213 [450/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:30.213 [451/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.213 [452/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.213 [453/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:30.213 [454/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:30.213 [455/740] Linking static target lib/librte_fib.a
00:02:30.213 [456/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.213 [457/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:30.213 [458/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:30.213 [459/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.213 [460/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:30.213 [461/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:30.213 [462/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:30.213 [463/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:30.213 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:30.482 [465/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:30.482 [466/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:30.482 [467/740] Linking static target lib/librte_pdump.a
00:02:30.482 [468/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:30.482 [469/740] Linking static target drivers/librte_bus_vdev.a
00:02:30.482 [470/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:30.482 [471/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:30.482 [472/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:30.482 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:30.482 [474/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:30.749 [475/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:30.750 [476/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:30.750 [477/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.750 [478/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:30.750 [479/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:30.750 [480/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:30.750 [481/740] Linking static target lib/librte_cryptodev.a
00:02:30.750 [482/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:30.750 [483/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:30.750 [484/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:30.750 [485/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:30.750 [486/740] Linking static target drivers/librte_bus_pci.a
00:02:30.750 [487/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:30.750 [488/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:30.750 [489/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:30.750 [490/740] Linking static target lib/librte_sched.a
00:02:30.750 [491/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:30.750 [492/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:30.750 [493/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:30.750 [494/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:30.750 [495/740] Linking static target lib/librte_table.a
00:02:30.750 [496/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:30.750 [497/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.750 [498/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:31.018 [499/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.018 [500/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:31.018 [501/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:31.018 [502/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:31.018 [503/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:31.018 [504/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:31.018 [505/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:31.018 [506/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.018 [507/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:31.018 [508/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:31.018 [509/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:31.018 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:31.018 [511/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:31.018 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:31.018 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:31.018 [514/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:31.018 [515/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:31.018 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:31.284 [517/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:31.284 [518/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:31.284 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:31.284 [520/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:31.284 [521/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:31.284 [522/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:31.284 [523/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:31.284 [524/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:31.284 [525/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:31.284 [526/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:31.284 [527/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:31.284 [528/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:31.284 [529/740] Linking static target lib/librte_node.a
00:02:31.284 [530/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:31.284 [531/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:31.284 [532/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:31.284 [533/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.284 [534/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:31.284 [535/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:31.284 [536/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.546 [537/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:31.546 [538/740] Linking static target lib/librte_member.a
00:02:31.546 [539/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:31.546 [540/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:31.546 [541/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:31.546 [542/740] Linking static target lib/librte_ipsec.a
00:02:31.546 [543/740] Linking static target lib/librte_ethdev.a
00:02:31.547 [544/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:31.547 [545/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:31.547 [546/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:31.547 [547/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.547 [548/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.547 [549/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:31.547 [550/740] Linking static target drivers/librte_mempool_ring.a
00:02:31.547 [551/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:31.547 [552/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:31.547 [553/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:31.547 [554/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:31.547 [555/740] Linking static target lib/librte_eventdev.a
00:02:31.547 [556/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.547 [557/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:31.547 [558/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:31.547 [559/740] Linking static target lib/librte_port.a
00:02:31.806 [560/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:31.806 [561/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:31.806 [562/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:31.806 [563/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.806 [564/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:31.806 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:31.806 [566/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.806 [567/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:31.806 [568/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:31.806 [569/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:31.806 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:31.806 [571/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:31.806 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:31.806 [573/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:31.806 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:31.806 [575/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:31.806 [576/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:31.806 [577/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.806 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:31.806 [579/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:31.806 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:31.806 [581/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:32.066 [582/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:32.066 [583/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:32.066 [584/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:32.066 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:32.066 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:32.066 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:32.066 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:32.066 [589/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:32.066 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:32.066 [591/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:32.066 [592/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:32.066 [593/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:32.066 [594/740] Linking static target lib/librte_hash.a
00:02:32.325 [595/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:32.325 [596/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:32.325 [597/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.325 [598/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:32.325 [599/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:32.325 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:32.325 [601/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:32.325 [602/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:32.325 [603/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:32.584 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:32.584 [605/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:32.584 [606/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:32.584 [607/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:32.584 [608/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:32.584 [609/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:32.843 [610/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:32.843 [611/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:32.843 [612/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:32.843 [613/740] Linking static target lib/librte_acl.a
00:02:33.103 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:33.103 [615/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.103 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:33.361 [617/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.361 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:33.621 [619/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.879 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:33.880 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:34.138 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:34.397 [623/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.656 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:34.656 [625/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:35.224 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:35.224 [627/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:35.224 [628/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:35.224 [629/740] Linking static target drivers/librte_net_i40e.a
00:02:35.484 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:35.743 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:36.002 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:36.002 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.295 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.673 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.673 [636/740] Linking target lib/librte_eal.so.23.0
00:02:40.673 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:40.673 [638/740] Linking target lib/librte_stack.so.23.0
00:02:40.673 [639/740] Linking target lib/librte_ring.so.23.0
00:02:40.673 [640/740] Linking target lib/librte_meter.so.23.0
00:02:40.673 [641/740] Linking target lib/librte_pci.so.23.0
00:02:40.673 [642/740] Linking target lib/librte_timer.so.23.0
00:02:40.673 [643/740] Linking target lib/librte_jobstats.so.23.0
00:02:40.673 [644/740] Linking target lib/librte_cfgfile.so.23.0
00:02:40.932 [645/740] Linking target lib/librte_dmadev.so.23.0
00:02:40.932 [646/740] Linking target lib/librte_rawdev.so.23.0
00:02:40.932 [647/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:40.932 [648/740] Linking target lib/librte_graph.so.23.0
00:02:40.932 [649/740] Linking target lib/librte_acl.so.23.0
00:02:40.932 [650/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:40.932 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:40.932 [652/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:40.932 [653/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:40.932 [654/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:40.932 [655/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:40.932 [656/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:40.932 [657/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:40.932 [658/740] Linking target lib/librte_rcu.so.23.0
00:02:40.932 [659/740] Linking target lib/librte_mempool.so.23.0
00:02:40.932 [660/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:41.191 [661/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:41.191 [662/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:41.191 [663/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:41.191 [664/740] Linking target lib/librte_mbuf.so.23.0
00:02:41.191 [665/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:41.191 [666/740] Linking target lib/librte_rib.so.23.0
00:02:41.191 [667/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:41.191 [668/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:41.191 [669/740] Linking target lib/librte_bbdev.so.23.0
00:02:41.450 [670/740] Linking target lib/librte_compressdev.so.23.0
00:02:41.450 [671/740] Linking target lib/librte_distributor.so.23.0
00:02:41.450 [672/740] Linking target lib/librte_gpudev.so.23.0
00:02:41.450 [673/740] Linking target lib/librte_reorder.so.23.0
00:02:41.450 [674/740] Linking target lib/librte_net.so.23.0
00:02:41.450 [675/740] Linking target lib/librte_regexdev.so.23.0
00:02:41.450 [676/740] Linking target lib/librte_fib.so.23.0
00:02:41.450 [677/740] Linking target lib/librte_sched.so.23.0
00:02:41.450 [678/740] Linking target lib/librte_cryptodev.so.23.0
00:02:41.450 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:41.450 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:41.450 [681/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:41.450 [682/740] Linking target lib/librte_security.so.23.0
00:02:41.450 [683/740] Linking target lib/librte_hash.so.23.0
00:02:41.450 [684/740] Linking target lib/librte_cmdline.so.23.0
00:02:41.450 [685/740] Linking target lib/librte_ethdev.so.23.0
00:02:41.724 [686/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:41.724 [687/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:41.724 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:41.724 [689/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:41.724 [690/740] Linking static target lib/librte_vhost.a
00:02:41.724 [691/740] Linking target lib/librte_member.so.23.0
00:02:41.724 [692/740] Linking target lib/librte_efd.so.23.0
00:02:41.724 [693/740] Linking target lib/librte_lpm.so.23.0
00:02:41.724 [694/740] Linking target lib/librte_metrics.so.23.0
00:02:41.724 [695/740] Linking target lib/librte_ipsec.so.23.0
00:02:41.724 [696/740] Linking target lib/librte_pcapng.so.23.0
00:02:41.724 [697/740] Linking target lib/librte_gso.so.23.0
00:02:41.724 [698/740] Linking target lib/librte_gro.so.23.0
00:02:41.724 [699/740] Linking target lib/librte_ip_frag.so.23.0
00:02:41.724 [700/740] Linking target lib/librte_power.so.23.0
00:02:41.724 [701/740] Linking target lib/librte_eventdev.so.23.0
00:02:41.724 [702/740] Linking target lib/librte_bpf.so.23.0
00:02:41.724 [703/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:41.724 [704/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:41.724 [705/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:41.724 [706/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:41.724 [707/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:41.982 [708/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:41.982 [709/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:41.982 [710/740] Linking target lib/librte_node.so.23.0
00:02:41.983 [711/740] Linking target lib/librte_latencystats.so.23.0
00:02:41.983 [712/740] Linking target lib/librte_bitratestats.so.23.0
00:02:41.983 [713/740] Linking target lib/librte_pdump.so.23.0
00:02:41.983 [714/740] Linking target lib/librte_port.so.23.0
00:02:41.983 [715/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:42.242 [716/740] Linking target lib/librte_table.so.23.0
00:02:42.242 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:42.501 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:42.501 [719/740] Linking static target lib/librte_pipeline.a
00:02:42.760 [720/740] Linking target app/dpdk-test-gpudev
00:02:42.760 [721/740] Linking target app/dpdk-proc-info
00:02:42.760 [722/740] Linking target app/dpdk-dumpcap
00:02:42.760 [723/740] Linking target app/dpdk-test-acl
00:02:42.760 [724/740] Linking target app/dpdk-test-bbdev
00:02:42.760 [725/740] Linking target app/dpdk-test-crypto-perf
00:02:42.760 [726/740] Linking target app/dpdk-pdump
00:02:42.760 [727/740] Linking target app/dpdk-test-cmdline
00:02:42.760 [728/740] Linking target app/dpdk-test-sad
00:02:42.760 [729/740] Linking target app/dpdk-test-pipeline
00:02:42.760 [730/740] Linking target app/dpdk-test-regex
00:02:42.760 [731/740] Linking target app/dpdk-test-fib
00:02:42.760 [732/740] Linking target app/dpdk-test-compress-perf
00:02:42.760 [733/740] Linking target app/dpdk-test-flow-perf
00:02:42.760 [734/740] Linking target app/dpdk-test-security-perf
00:02:42.760 [735/740] Linking target app/dpdk-test-eventdev
00:02:42.760 [736/740] Linking target app/dpdk-testpmd
00:02:43.702 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.702 [738/740] Linking target lib/librte_vhost.so.23.0
00:02:46.997 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.997 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:46.997 22:08:36 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:02:46.997 22:08:36
00:02:46.997 22:08:36 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:46.997 22:08:36 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install
00:02:46.997 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:46.997 [0/1] Installing files.
00:02:47.262 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.262 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.263 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:47.264 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.265 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.266 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:47.267 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:47.267 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.267 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.267 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:47.268 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 
Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.268 Installing lib/librte_graph.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:47.531 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:47.531 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:47.531 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:47.531 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:47.531 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.531 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.531 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.531 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.532 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.533 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.534 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
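[Editor's note] At this point essentially every public DPDK header has been copied into the build prefix, so the include directory is usable on its own. A minimal sketch of verifying that the installed headers compile standalone — the paths come from the install lines above, while the test.c file name and the -march=native flag are assumptions (some rte_* headers use SIMD intrinsics that need an ISA baseline the compiler default may not provide):

# Hypothetical smoke test: compile an empty program against the freshly installed headers.
DPDK_INC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
printf '#include <rte_ring.h>\n#include <rte_mbuf.h>\nint main(void){return 0;}\n' > test.c
gcc -march=native -I "$DPDK_INC" -c test.c -o test.o   # success means the header set is self-contained
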
00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:47.535 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:47.535 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:47.535 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:47.535 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:47.535 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:47.535 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:47.535 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:47.535 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:47.535 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:47.535 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:47.535 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:47.535 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:47.535 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:47.535 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:47.535 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:47.535 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:47.535 
Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:47.535 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:47.535 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:47.535 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:47.535 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:47.535 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:47.535 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:47.535 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:47.535 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:47.535 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:47.535 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:47.535 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:47.535 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:47.535 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:47.536 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:47.536 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:47.536 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:47.536 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:47.536 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:47.536 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:47.536 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:47.536 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:47.536 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:47.536 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:47.536 Installing symlink pointing to librte_cfgfile.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:47.536 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:47.536 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:47.536 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:47.536 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:47.536 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:47.536 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:47.536 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:47.536 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:47.536 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:47.536 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:47.536 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:47.536 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:47.536 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:47.536 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:47.536 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:47.536 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:47.536 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:47.536 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:47.536 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:47.536 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:47.536 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:47.536 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:47.536 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:47.536 Installing symlink pointing to 
librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:47.536 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:47.536 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:47.536 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:47.536 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:47.536 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:47.536 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:47.536 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:47.536 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:47.536 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:47.536 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:47.536 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:47.536 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:47.536 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:47.536 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:47.536 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:47.536 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:47.536 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:47.536 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:47.536 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:47.536 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:47.536 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:47.536 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:47.536 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:47.536 Installing symlink pointing to librte_vhost.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:47.536 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:47.536 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:47.536 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:47.536 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:47.536 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:47.536 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:47.536 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:47.536 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:47.536 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:47.536 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:47.536 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:47.536 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:47.536 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:47.536 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:47.536 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:47.536 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:47.536 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:47.536 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:47.536 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:47.536 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:47.536 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:47.536 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:47.537 Installing symlink pointing to librte_net_i40e.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:47.537 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:47.537 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:47.537 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:47.537 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:47.537 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:47.537 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:47.537 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:47.537 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:47.537 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:47.537 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:47.537 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:47.537 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:47.537 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:47.537 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:47.537 22:08:37 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:47.796 22:08:37 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.796 00:02:47.796 real 0m28.426s 00:02:47.796 user 7m40.899s 00:02:47.796 sys 1m58.215s 00:02:47.796 22:08:37 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:47.796 22:08:37 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:47.796 ************************************ 00:02:47.796 END TEST build_native_dpdk 00:02:47.796 ************************************ 00:02:47.796 22:08:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:47.796 22:08:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:47.796 22:08:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:47.796 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:48.056 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:48.056 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:48.056 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:48.315 Using 'verbs' RDMA provider 00:03:01.468 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 
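[Editor's note] The './librte_*.so' -> 'dpdk/pmds-23.0/...' lines above show symlink-drivers-solibs.sh relocating the PMDs into the versioned plugin directory, and the SPDK configure step that follows resolves DPDK entirely through the freshly installed pkg-config files ("Using .../dpdk/build/lib/pkgconfig for additional libs...") rather than hard-coded paths. A minimal sketch, assuming only the pkgconfig directory named in the log, of inspecting by hand what configure consumes here:

# Point pkg-config at the just-installed .pc files (directory taken from the log).
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # the release behind the .so.23 / pmds-23.0 names above
pkg-config --cflags libdpdk       # should surface the build/include directory
pkg-config --libs libdpdk         # -L/-l flags for the librte_* symlinks installed above
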
00:03:13.688 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:13.948 Creating mk/config.mk...done. 00:03:13.949 Creating mk/cc.flags.mk...done. 00:03:13.949 Type 'make' to build. 00:03:13.949 22:09:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:13.949 22:09:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:13.949 22:09:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:13.949 22:09:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.949 ************************************ 00:03:13.949 START TEST make 00:03:13.949 ************************************ 00:03:13.949 22:09:03 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:15.868 The Meson build system 00:03:15.868 Version: 1.5.0 00:03:15.868 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:15.868 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.868 Build type: native build 00:03:15.868 Project name: libvfio-user 00:03:15.868 Project version: 0.0.1 00:03:15.868 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:15.868 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:15.868 Host machine cpu family: x86_64 00:03:15.868 Host machine cpu: x86_64 00:03:15.868 Run-time dependency threads found: YES 00:03:15.868 Library dl found: YES 00:03:15.868 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:15.868 Run-time dependency json-c found: YES 0.17 00:03:15.868 Run-time dependency cmocka found: YES 1.1.7 00:03:15.868 Program pytest-3 found: NO 00:03:15.868 Program flake8 found: NO 00:03:15.868 Program misspell-fixer found: NO 00:03:15.868 Program restructuredtext-lint found: NO 00:03:15.868 Program valgrind found: YES (/usr/bin/valgrind) 00:03:15.868 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:15.868 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:15.868 Compiler for C supports arguments -Wwrite-strings: YES 00:03:15.868 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:15.868 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:15.868 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:15.868 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:15.868 Build targets in project: 8 00:03:15.868 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:15.868 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:15.868 00:03:15.868 libvfio-user 0.0.1 00:03:15.868 00:03:15.868 User defined options 00:03:15.868 buildtype : debug 00:03:15.868 default_library: shared 00:03:15.868 libdir : /usr/local/lib 00:03:15.868 00:03:15.868 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:16.436 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:16.695 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:16.695 [2/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:16.695 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:16.695 [4/37] Compiling C object samples/null.p/null.c.o 00:03:16.695 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:16.695 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:16.695 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:16.695 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:16.695 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:16.695 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:16.695 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:16.695 [12/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:16.695 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:16.695 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:16.695 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:16.695 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:16.695 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:16.695 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:16.695 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:16.695 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:16.695 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:16.695 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:16.695 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:16.695 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:16.695 [25/37] Compiling C object samples/server.p/server.c.o 00:03:16.695 [26/37] Compiling C object samples/client.p/client.c.o 00:03:16.695 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:16.695 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:16.955 [29/37] Linking target samples/client 00:03:16.955 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:16.955 [31/37] Linking target test/unit_tests 00:03:16.955 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:16.955 [33/37] Linking target samples/server 00:03:16.955 [34/37] Linking target samples/gpio-pci-idio-16 00:03:16.955 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:16.955 [36/37] Linking target samples/null 00:03:16.955 [37/37] Linking target samples/lspci 00:03:16.955 INFO: autodetecting backend as ninja 00:03:16.955 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
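[Editor's note] The libvfio-user subproject is configured and built with a plain meson/ninja flow. A hand-run equivalent, as a sketch: the source and build directories are the ones meson reports above, and the buildtype, default_library, and libdir values mirror the "User defined options" block; everything else is standard meson usage, not anything SPDK-specific.

# Reproduce the configure step meson just logged (directories from the log).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup build/libvfio-user/build-debug libvfio-user \
    --buildtype debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
# Then drive the ninja backend exactly as the INFO line above describes.
ninja -C build/libvfio-user/build-debug
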
00:03:17.215 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:17.474 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:17.474 ninja: no work to do. 00:03:44.038 CC lib/ut_mock/mock.o 00:03:44.038 CC lib/log/log.o 00:03:44.038 CC lib/ut/ut.o 00:03:44.038 CC lib/log/log_flags.o 00:03:44.038 CC lib/log/log_deprecated.o 00:03:44.038 LIB libspdk_ut.a 00:03:44.038 LIB libspdk_ut_mock.a 00:03:44.038 LIB libspdk_log.a 00:03:44.038 SO libspdk_ut_mock.so.6.0 00:03:44.038 SO libspdk_ut.so.2.0 00:03:44.038 SO libspdk_log.so.7.1 00:03:44.298 SYMLINK libspdk_ut_mock.so 00:03:44.298 SYMLINK libspdk_ut.so 00:03:44.298 SYMLINK libspdk_log.so 00:03:44.558 CC lib/dma/dma.o 00:03:44.558 CC lib/util/base64.o 00:03:44.558 CC lib/util/bit_array.o 00:03:44.558 CC lib/util/cpuset.o 00:03:44.558 CC lib/util/crc16.o 00:03:44.558 CC lib/util/crc32.o 00:03:44.558 CC lib/util/crc32c.o 00:03:44.558 CC lib/util/crc32_ieee.o 00:03:44.558 CC lib/util/crc64.o 00:03:44.558 CC lib/util/dif.o 00:03:44.558 CXX lib/trace_parser/trace.o 00:03:44.558 CC lib/util/fd.o 00:03:44.558 CC lib/ioat/ioat.o 00:03:44.558 CC lib/util/fd_group.o 00:03:44.558 CC lib/util/file.o 00:03:44.558 CC lib/util/hexlify.o 00:03:44.558 CC lib/util/iov.o 00:03:44.558 CC lib/util/math.o 00:03:44.558 CC lib/util/net.o 00:03:44.558 CC lib/util/pipe.o 00:03:44.558 CC lib/util/strerror_tls.o 00:03:44.558 CC lib/util/string.o 00:03:44.558 CC lib/util/uuid.o 00:03:44.558 CC lib/util/xor.o 00:03:44.558 CC lib/util/zipf.o 00:03:44.558 CC lib/util/md5.o 00:03:44.817 CC lib/vfio_user/host/vfio_user_pci.o 00:03:44.817 CC lib/vfio_user/host/vfio_user.o 00:03:44.817 LIB libspdk_dma.a 00:03:44.817 SO libspdk_dma.so.5.0 00:03:44.817 LIB libspdk_ioat.a 00:03:44.817 SYMLINK libspdk_dma.so 00:03:44.817 SO libspdk_ioat.so.7.0 00:03:45.077 SYMLINK libspdk_ioat.so 00:03:45.077 LIB libspdk_vfio_user.a 00:03:45.077 SO libspdk_vfio_user.so.5.0 00:03:45.077 SYMLINK libspdk_vfio_user.so 00:03:45.077 LIB libspdk_util.a 00:03:45.077 SO libspdk_util.so.10.1 00:03:45.336 SYMLINK libspdk_util.so 00:03:45.595 CC lib/json/json_parse.o 00:03:45.595 CC lib/json/json_util.o 00:03:45.595 CC lib/json/json_write.o 00:03:45.595 CC lib/env_dpdk/env.o 00:03:45.595 CC lib/env_dpdk/memory.o 00:03:45.595 CC lib/env_dpdk/pci.o 00:03:45.595 CC lib/conf/conf.o 00:03:45.595 CC lib/env_dpdk/init.o 00:03:45.595 CC lib/env_dpdk/threads.o 00:03:45.595 CC lib/env_dpdk/pci_ioat.o 00:03:45.595 CC lib/idxd/idxd.o 00:03:45.595 CC lib/env_dpdk/pci_virtio.o 00:03:45.595 CC lib/idxd/idxd_user.o 00:03:45.595 CC lib/env_dpdk/pci_vmd.o 00:03:45.595 CC lib/idxd/idxd_kernel.o 00:03:45.595 CC lib/vmd/vmd.o 00:03:45.595 CC lib/env_dpdk/pci_idxd.o 00:03:45.595 CC lib/vmd/led.o 00:03:45.595 CC lib/rdma_utils/rdma_utils.o 00:03:45.595 CC lib/env_dpdk/pci_event.o 00:03:45.595 CC lib/env_dpdk/sigbus_handler.o 00:03:45.595 CC lib/env_dpdk/pci_dpdk.o 00:03:45.595 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.595 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.853 LIB libspdk_conf.a 00:03:45.853 LIB libspdk_json.a 00:03:45.853 LIB libspdk_rdma_utils.a 00:03:45.853 SO libspdk_conf.so.6.0 00:03:45.853 SO libspdk_json.so.6.0 00:03:45.853 SO libspdk_rdma_utils.so.1.0 00:03:45.853 SYMLINK libspdk_conf.so 00:03:45.853 SYMLINK libspdk_rdma_utils.so 00:03:45.853 SYMLINK libspdk_json.so 00:03:46.113 LIB libspdk_idxd.a 00:03:46.113 SO 
libspdk_idxd.so.12.1 00:03:46.113 LIB libspdk_vmd.a 00:03:46.113 SO libspdk_vmd.so.6.0 00:03:46.113 SYMLINK libspdk_idxd.so 00:03:46.372 LIB libspdk_trace_parser.a 00:03:46.372 SYMLINK libspdk_vmd.so 00:03:46.372 SO libspdk_trace_parser.so.6.0 00:03:46.372 CC lib/jsonrpc/jsonrpc_server.o 00:03:46.372 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.372 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.372 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.372 CC lib/rdma_provider/common.o 00:03:46.372 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:46.372 SYMLINK libspdk_trace_parser.so 00:03:46.632 LIB libspdk_jsonrpc.a 00:03:46.632 LIB libspdk_rdma_provider.a 00:03:46.632 SO libspdk_jsonrpc.so.6.0 00:03:46.632 SO libspdk_rdma_provider.so.7.0 00:03:46.632 SYMLINK libspdk_jsonrpc.so 00:03:46.632 SYMLINK libspdk_rdma_provider.so 00:03:46.632 LIB libspdk_env_dpdk.a 00:03:46.632 SO libspdk_env_dpdk.so.15.1 00:03:46.893 SYMLINK libspdk_env_dpdk.so 00:03:46.893 CC lib/rpc/rpc.o 00:03:47.153 LIB libspdk_rpc.a 00:03:47.153 SO libspdk_rpc.so.6.0 00:03:47.153 SYMLINK libspdk_rpc.so 00:03:47.724 CC lib/trace/trace.o 00:03:47.724 CC lib/trace/trace_flags.o 00:03:47.724 CC lib/trace/trace_rpc.o 00:03:47.724 CC lib/keyring/keyring.o 00:03:47.724 CC lib/keyring/keyring_rpc.o 00:03:47.724 CC lib/notify/notify.o 00:03:47.724 CC lib/notify/notify_rpc.o 00:03:47.724 LIB libspdk_notify.a 00:03:47.724 LIB libspdk_keyring.a 00:03:47.724 LIB libspdk_trace.a 00:03:47.724 SO libspdk_notify.so.6.0 00:03:47.724 SO libspdk_keyring.so.2.0 00:03:47.724 SO libspdk_trace.so.11.0 00:03:47.724 SYMLINK libspdk_notify.so 00:03:47.984 SYMLINK libspdk_keyring.so 00:03:47.984 SYMLINK libspdk_trace.so 00:03:48.244 CC lib/thread/thread.o 00:03:48.244 CC lib/thread/iobuf.o 00:03:48.244 CC lib/sock/sock.o 00:03:48.244 CC lib/sock/sock_rpc.o 00:03:48.504 LIB libspdk_sock.a 00:03:48.504 SO libspdk_sock.so.10.0 00:03:48.763 SYMLINK libspdk_sock.so 00:03:49.023 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.023 CC lib/nvme/nvme_ctrlr.o 00:03:49.023 CC lib/nvme/nvme_fabric.o 00:03:49.023 CC lib/nvme/nvme_ns_cmd.o 00:03:49.023 CC lib/nvme/nvme_ns.o 00:03:49.023 CC lib/nvme/nvme_pcie_common.o 00:03:49.023 CC lib/nvme/nvme_pcie.o 00:03:49.023 CC lib/nvme/nvme_qpair.o 00:03:49.023 CC lib/nvme/nvme.o 00:03:49.023 CC lib/nvme/nvme_quirks.o 00:03:49.023 CC lib/nvme/nvme_transport.o 00:03:49.023 CC lib/nvme/nvme_discovery.o 00:03:49.023 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.023 CC lib/nvme/nvme_tcp.o 00:03:49.023 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.023 CC lib/nvme/nvme_opal.o 00:03:49.023 CC lib/nvme/nvme_io_msg.o 00:03:49.023 CC lib/nvme/nvme_poll_group.o 00:03:49.023 CC lib/nvme/nvme_zns.o 00:03:49.023 CC lib/nvme/nvme_stubs.o 00:03:49.023 CC lib/nvme/nvme_auth.o 00:03:49.023 CC lib/nvme/nvme_cuse.o 00:03:49.023 CC lib/nvme/nvme_vfio_user.o 00:03:49.023 CC lib/nvme/nvme_rdma.o 00:03:49.282 LIB libspdk_thread.a 00:03:49.282 SO libspdk_thread.so.11.0 00:03:49.541 SYMLINK libspdk_thread.so 00:03:49.801 CC lib/vfu_tgt/tgt_rpc.o 00:03:49.801 CC lib/vfu_tgt/tgt_endpoint.o 00:03:49.801 CC lib/fsdev/fsdev.o 00:03:49.801 CC lib/accel/accel_rpc.o 00:03:49.801 CC lib/fsdev/fsdev_io.o 00:03:49.801 CC lib/fsdev/fsdev_rpc.o 00:03:49.801 CC lib/accel/accel.o 00:03:49.801 CC lib/accel/accel_sw.o 00:03:49.801 CC lib/virtio/virtio.o 00:03:49.801 CC lib/virtio/virtio_vhost_user.o 00:03:49.801 CC lib/virtio/virtio_vfio_user.o 00:03:49.801 CC lib/init/subsystem.o 00:03:49.801 CC lib/init/json_config.o 00:03:49.801 CC lib/virtio/virtio_pci.o 00:03:49.801 CC 
lib/init/subsystem_rpc.o 00:03:49.801 CC lib/init/rpc.o 00:03:49.801 CC lib/blob/blobstore.o 00:03:49.801 CC lib/blob/request.o 00:03:49.801 CC lib/blob/zeroes.o 00:03:49.801 CC lib/blob/blob_bs_dev.o 00:03:50.060 LIB libspdk_init.a 00:03:50.061 LIB libspdk_vfu_tgt.a 00:03:50.061 SO libspdk_init.so.6.0 00:03:50.061 SO libspdk_vfu_tgt.so.3.0 00:03:50.061 LIB libspdk_virtio.a 00:03:50.061 SYMLINK libspdk_init.so 00:03:50.061 SO libspdk_virtio.so.7.0 00:03:50.061 SYMLINK libspdk_vfu_tgt.so 00:03:50.320 SYMLINK libspdk_virtio.so 00:03:50.320 LIB libspdk_fsdev.a 00:03:50.320 SO libspdk_fsdev.so.2.0 00:03:50.320 SYMLINK libspdk_fsdev.so 00:03:50.580 CC lib/event/app.o 00:03:50.580 CC lib/event/reactor.o 00:03:50.580 CC lib/event/log_rpc.o 00:03:50.580 CC lib/event/app_rpc.o 00:03:50.580 CC lib/event/scheduler_static.o 00:03:50.580 LIB libspdk_accel.a 00:03:50.580 SO libspdk_accel.so.16.0 00:03:50.839 SYMLINK libspdk_accel.so 00:03:50.839 LIB libspdk_nvme.a 00:03:50.839 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:50.839 SO libspdk_nvme.so.15.0 00:03:50.839 LIB libspdk_event.a 00:03:50.839 SO libspdk_event.so.14.0 00:03:50.839 SYMLINK libspdk_event.so 00:03:51.099 SYMLINK libspdk_nvme.so 00:03:51.099 CC lib/bdev/bdev.o 00:03:51.099 CC lib/bdev/bdev_rpc.o 00:03:51.099 CC lib/bdev/bdev_zone.o 00:03:51.099 CC lib/bdev/part.o 00:03:51.099 CC lib/bdev/scsi_nvme.o 00:03:51.359 LIB libspdk_fuse_dispatcher.a 00:03:51.359 SO libspdk_fuse_dispatcher.so.1.0 00:03:51.359 SYMLINK libspdk_fuse_dispatcher.so 00:03:51.927 LIB libspdk_blob.a 00:03:51.927 SO libspdk_blob.so.12.0 00:03:52.186 SYMLINK libspdk_blob.so 00:03:52.445 CC lib/blobfs/blobfs.o 00:03:52.445 CC lib/lvol/lvol.o 00:03:52.445 CC lib/blobfs/tree.o 00:03:53.014 LIB libspdk_bdev.a 00:03:53.014 SO libspdk_bdev.so.17.0 00:03:53.014 LIB libspdk_blobfs.a 00:03:53.014 SYMLINK libspdk_bdev.so 00:03:53.014 SO libspdk_blobfs.so.11.0 00:03:53.014 LIB libspdk_lvol.a 00:03:53.014 SYMLINK libspdk_blobfs.so 00:03:53.014 SO libspdk_lvol.so.11.0 00:03:53.275 SYMLINK libspdk_lvol.so 00:03:53.275 CC lib/ublk/ublk.o 00:03:53.275 CC lib/ublk/ublk_rpc.o 00:03:53.275 CC lib/nbd/nbd.o 00:03:53.275 CC lib/nvmf/ctrlr.o 00:03:53.275 CC lib/nbd/nbd_rpc.o 00:03:53.275 CC lib/nvmf/ctrlr_discovery.o 00:03:53.275 CC lib/nvmf/ctrlr_bdev.o 00:03:53.275 CC lib/nvmf/subsystem.o 00:03:53.275 CC lib/nvmf/nvmf.o 00:03:53.534 CC lib/nvmf/nvmf_rpc.o 00:03:53.534 CC lib/scsi/dev.o 00:03:53.534 CC lib/nvmf/transport.o 00:03:53.534 CC lib/scsi/lun.o 00:03:53.534 CC lib/nvmf/tcp.o 00:03:53.534 CC lib/scsi/port.o 00:03:53.534 CC lib/nvmf/stubs.o 00:03:53.534 CC lib/scsi/scsi.o 00:03:53.534 CC lib/nvmf/mdns_server.o 00:03:53.534 CC lib/scsi/scsi_bdev.o 00:03:53.534 CC lib/nvmf/vfio_user.o 00:03:53.534 CC lib/ftl/ftl_core.o 00:03:53.534 CC lib/nvmf/rdma.o 00:03:53.534 CC lib/ftl/ftl_init.o 00:03:53.534 CC lib/scsi/scsi_pr.o 00:03:53.534 CC lib/nvmf/auth.o 00:03:53.534 CC lib/scsi/scsi_rpc.o 00:03:53.534 CC lib/scsi/task.o 00:03:53.534 CC lib/ftl/ftl_layout.o 00:03:53.534 CC lib/ftl/ftl_debug.o 00:03:53.534 CC lib/ftl/ftl_io.o 00:03:53.534 CC lib/ftl/ftl_sb.o 00:03:53.534 CC lib/ftl/ftl_l2p_flat.o 00:03:53.534 CC lib/ftl/ftl_l2p.o 00:03:53.534 CC lib/ftl/ftl_nv_cache.o 00:03:53.534 CC lib/ftl/ftl_band.o 00:03:53.534 CC lib/ftl/ftl_band_ops.o 00:03:53.534 CC lib/ftl/ftl_writer.o 00:03:53.534 CC lib/ftl/ftl_rq.o 00:03:53.534 CC lib/ftl/ftl_reloc.o 00:03:53.534 CC lib/ftl/ftl_l2p_cache.o 00:03:53.534 CC lib/ftl/ftl_p2l.o 00:03:53.534 CC lib/ftl/ftl_p2l_log.o 00:03:53.534 CC 
lib/ftl/mngt/ftl_mngt.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:53.534 CC lib/ftl/utils/ftl_md.o 00:03:53.534 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.534 CC lib/ftl/utils/ftl_conf.o 00:03:53.534 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.534 CC lib/ftl/utils/ftl_mempool.o 00:03:53.534 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.534 CC lib/ftl/utils/ftl_property.o 00:03:53.534 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.534 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.534 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.534 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:53.534 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.534 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:53.534 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.534 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.534 CC lib/ftl/base/ftl_base_dev.o 00:03:53.534 CC lib/ftl/ftl_trace.o 00:03:54.103 LIB libspdk_nbd.a 00:03:54.103 LIB libspdk_scsi.a 00:03:54.103 SO libspdk_nbd.so.7.0 00:03:54.103 SO libspdk_scsi.so.9.0 00:03:54.103 SYMLINK libspdk_nbd.so 00:03:54.363 SYMLINK libspdk_scsi.so 00:03:54.363 LIB libspdk_ublk.a 00:03:54.363 SO libspdk_ublk.so.3.0 00:03:54.363 SYMLINK libspdk_ublk.so 00:03:54.623 LIB libspdk_ftl.a 00:03:54.623 CC lib/iscsi/conn.o 00:03:54.623 CC lib/iscsi/init_grp.o 00:03:54.623 CC lib/iscsi/iscsi.o 00:03:54.623 CC lib/iscsi/param.o 00:03:54.623 CC lib/iscsi/portal_grp.o 00:03:54.623 CC lib/iscsi/tgt_node.o 00:03:54.623 CC lib/iscsi/iscsi_subsystem.o 00:03:54.623 CC lib/iscsi/iscsi_rpc.o 00:03:54.623 CC lib/iscsi/task.o 00:03:54.623 CC lib/vhost/vhost.o 00:03:54.623 CC lib/vhost/vhost_rpc.o 00:03:54.623 CC lib/vhost/vhost_scsi.o 00:03:54.623 CC lib/vhost/vhost_blk.o 00:03:54.623 CC lib/vhost/rte_vhost_user.o 00:03:54.623 SO libspdk_ftl.so.9.0 00:03:54.883 SYMLINK libspdk_ftl.so 00:03:55.453 LIB libspdk_nvmf.a 00:03:55.453 SO libspdk_nvmf.so.20.0 00:03:55.453 LIB libspdk_vhost.a 00:03:55.453 SO libspdk_vhost.so.8.0 00:03:55.453 SYMLINK libspdk_nvmf.so 00:03:55.453 SYMLINK libspdk_vhost.so 00:03:55.713 LIB libspdk_iscsi.a 00:03:55.713 SO libspdk_iscsi.so.8.0 00:03:55.713 SYMLINK libspdk_iscsi.so 00:03:56.283 CC module/vfu_device/vfu_virtio.o 00:03:56.283 CC module/vfu_device/vfu_virtio_blk.o 00:03:56.283 CC module/vfu_device/vfu_virtio_scsi.o 00:03:56.283 CC module/vfu_device/vfu_virtio_rpc.o 00:03:56.283 CC module/vfu_device/vfu_virtio_fs.o 00:03:56.283 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.543 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.543 CC module/accel/error/accel_error_rpc.o 00:03:56.543 CC module/accel/error/accel_error.o 00:03:56.543 CC module/accel/dsa/accel_dsa.o 00:03:56.543 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.543 CC module/accel/iaa/accel_iaa.o 00:03:56.543 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.543 LIB libspdk_env_dpdk_rpc.a 00:03:56.543 CC module/blob/bdev/blob_bdev.o 
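The repeating record types through this stretch of the build follow SPDK's per-component pattern: CC for each object file, LIB for the library itself, then, for the shared variants, SO for the versioned shared object (e.g. libspdk_log.so.7.1) and SYMLINK for the unversioned development link. A minimal sketch of that versioned-library convention, with illustrative compiler flags rather than SPDK's actual make rules:

    # Rough shape of what one LIB/SO/SYMLINK triple above corresponds to:
    cc -shared -fPIC -Wl,-soname,libspdk_log.so.7 \
        -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o

    ln -sf libspdk_log.so.7.1 libspdk_log.so    # the "SYMLINK libspdk_log.so" record

Consumers link against the unversioned name while the runtime loader resolves the soname, which is why both files appear for every libspdk_* component in the listing.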
00:03:56.543 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.543 CC module/accel/ioat/accel_ioat.o 00:03:56.543 CC module/keyring/linux/keyring.o 00:03:56.543 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.543 CC module/keyring/linux/keyring_rpc.o 00:03:56.543 CC module/keyring/file/keyring.o 00:03:56.543 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.543 CC module/keyring/file/keyring_rpc.o 00:03:56.543 CC module/fsdev/aio/fsdev_aio.o 00:03:56.543 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:56.543 CC module/sock/posix/posix.o 00:03:56.543 CC module/fsdev/aio/linux_aio_mgr.o 00:03:56.543 SO libspdk_env_dpdk_rpc.so.6.0 00:03:56.543 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.803 LIB libspdk_scheduler_gscheduler.a 00:03:56.803 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.803 LIB libspdk_keyring_linux.a 00:03:56.803 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.803 LIB libspdk_keyring_file.a 00:03:56.803 LIB libspdk_accel_error.a 00:03:56.803 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:56.803 LIB libspdk_accel_ioat.a 00:03:56.803 LIB libspdk_accel_iaa.a 00:03:56.803 SO libspdk_keyring_linux.so.1.0 00:03:56.803 SO libspdk_keyring_file.so.2.0 00:03:56.803 SO libspdk_accel_error.so.2.0 00:03:56.803 LIB libspdk_scheduler_dynamic.a 00:03:56.803 SO libspdk_accel_ioat.so.6.0 00:03:56.803 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.803 SO libspdk_accel_iaa.so.3.0 00:03:56.803 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.803 SO libspdk_scheduler_dynamic.so.4.0 00:03:56.803 LIB libspdk_blob_bdev.a 00:03:56.803 LIB libspdk_accel_dsa.a 00:03:56.803 SYMLINK libspdk_accel_error.so 00:03:56.803 SYMLINK libspdk_keyring_linux.so 00:03:56.803 SYMLINK libspdk_keyring_file.so 00:03:56.803 SYMLINK libspdk_accel_ioat.so 00:03:56.803 SO libspdk_blob_bdev.so.12.0 00:03:56.803 SO libspdk_accel_dsa.so.5.0 00:03:56.803 SYMLINK libspdk_accel_iaa.so 00:03:56.803 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.803 LIB libspdk_vfu_device.a 00:03:57.063 SYMLINK libspdk_blob_bdev.so 00:03:57.063 SYMLINK libspdk_accel_dsa.so 00:03:57.063 SO libspdk_vfu_device.so.3.0 00:03:57.063 SYMLINK libspdk_vfu_device.so 00:03:57.063 LIB libspdk_fsdev_aio.a 00:03:57.323 SO libspdk_fsdev_aio.so.1.0 00:03:57.324 LIB libspdk_sock_posix.a 00:03:57.324 SO libspdk_sock_posix.so.6.0 00:03:57.324 SYMLINK libspdk_fsdev_aio.so 00:03:57.324 SYMLINK libspdk_sock_posix.so 00:03:57.324 CC module/bdev/malloc/bdev_malloc.o 00:03:57.324 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:57.583 CC module/bdev/gpt/gpt.o 00:03:57.583 CC module/bdev/gpt/vbdev_gpt.o 00:03:57.583 CC module/bdev/error/vbdev_error.o 00:03:57.583 CC module/bdev/error/vbdev_error_rpc.o 00:03:57.583 CC module/bdev/raid/bdev_raid.o 00:03:57.583 CC module/bdev/raid/bdev_raid_rpc.o 00:03:57.583 CC module/bdev/raid/bdev_raid_sb.o 00:03:57.583 CC module/bdev/raid/raid0.o 00:03:57.583 CC module/bdev/raid/raid1.o 00:03:57.583 CC module/bdev/raid/concat.o 00:03:57.583 CC module/bdev/lvol/vbdev_lvol.o 00:03:57.583 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:57.583 CC module/blobfs/bdev/blobfs_bdev.o 00:03:57.583 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:57.583 CC module/bdev/delay/vbdev_delay.o 00:03:57.583 CC module/bdev/passthru/vbdev_passthru.o 00:03:57.583 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:57.583 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:57.583 CC module/bdev/split/vbdev_split.o 00:03:57.583 CC module/bdev/split/vbdev_split_rpc.o 00:03:57.583 CC module/bdev/iscsi/bdev_iscsi.o 00:03:57.583 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:03:57.583 CC module/bdev/aio/bdev_aio.o 00:03:57.583 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.583 CC module/bdev/null/bdev_null.o 00:03:57.583 CC module/bdev/null/bdev_null_rpc.o 00:03:57.583 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:57.583 CC module/bdev/nvme/bdev_nvme.o 00:03:57.583 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:57.583 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:57.583 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:57.583 CC module/bdev/ftl/bdev_ftl.o 00:03:57.583 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:57.583 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.583 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:57.583 CC module/bdev/nvme/nvme_rpc.o 00:03:57.583 CC module/bdev/nvme/bdev_mdns_client.o 00:03:57.583 CC module/bdev/nvme/vbdev_opal.o 00:03:57.583 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.583 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.842 LIB libspdk_blobfs_bdev.a 00:03:57.842 LIB libspdk_bdev_gpt.a 00:03:57.842 LIB libspdk_bdev_error.a 00:03:57.842 SO libspdk_blobfs_bdev.so.6.0 00:03:57.842 SO libspdk_bdev_gpt.so.6.0 00:03:57.842 SO libspdk_bdev_error.so.6.0 00:03:57.842 LIB libspdk_bdev_split.a 00:03:57.842 LIB libspdk_bdev_ftl.a 00:03:57.842 LIB libspdk_bdev_passthru.a 00:03:57.842 LIB libspdk_bdev_null.a 00:03:57.842 SYMLINK libspdk_blobfs_bdev.so 00:03:57.842 SYMLINK libspdk_bdev_gpt.so 00:03:57.842 SO libspdk_bdev_split.so.6.0 00:03:57.842 SO libspdk_bdev_ftl.so.6.0 00:03:57.842 SYMLINK libspdk_bdev_error.so 00:03:57.842 SO libspdk_bdev_null.so.6.0 00:03:57.842 SO libspdk_bdev_passthru.so.6.0 00:03:57.842 LIB libspdk_bdev_malloc.a 00:03:57.842 LIB libspdk_bdev_aio.a 00:03:57.842 LIB libspdk_bdev_iscsi.a 00:03:57.842 LIB libspdk_bdev_delay.a 00:03:57.842 LIB libspdk_bdev_zone_block.a 00:03:57.842 SO libspdk_bdev_malloc.so.6.0 00:03:57.842 SO libspdk_bdev_iscsi.so.6.0 00:03:57.842 SO libspdk_bdev_aio.so.6.0 00:03:57.842 SYMLINK libspdk_bdev_split.so 00:03:57.842 SYMLINK libspdk_bdev_null.so 00:03:57.842 SYMLINK libspdk_bdev_ftl.so 00:03:57.842 SO libspdk_bdev_delay.so.6.0 00:03:57.842 SO libspdk_bdev_zone_block.so.6.0 00:03:57.842 SYMLINK libspdk_bdev_passthru.so 00:03:58.101 SYMLINK libspdk_bdev_aio.so 00:03:58.101 SYMLINK libspdk_bdev_malloc.so 00:03:58.101 SYMLINK libspdk_bdev_iscsi.so 00:03:58.101 SYMLINK libspdk_bdev_delay.so 00:03:58.101 SYMLINK libspdk_bdev_zone_block.so 00:03:58.101 LIB libspdk_bdev_lvol.a 00:03:58.101 LIB libspdk_bdev_virtio.a 00:03:58.101 SO libspdk_bdev_lvol.so.6.0 00:03:58.101 SO libspdk_bdev_virtio.so.6.0 00:03:58.101 SYMLINK libspdk_bdev_lvol.so 00:03:58.101 SYMLINK libspdk_bdev_virtio.so 00:03:58.362 LIB libspdk_bdev_raid.a 00:03:58.362 SO libspdk_bdev_raid.so.6.0 00:03:58.362 SYMLINK libspdk_bdev_raid.so 00:03:59.302 LIB libspdk_bdev_nvme.a 00:03:59.562 SO libspdk_bdev_nvme.so.7.1 00:03:59.562 SYMLINK libspdk_bdev_nvme.so 00:04:00.133 CC module/event/subsystems/vmd/vmd.o 00:04:00.133 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.133 CC module/event/subsystems/sock/sock.o 00:04:00.133 CC module/event/subsystems/fsdev/fsdev.o 00:04:00.133 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.133 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.393 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:00.393 CC module/event/subsystems/keyring/keyring.o 00:04:00.393 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.393 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.393 LIB libspdk_event_vfu_tgt.a 00:04:00.393 LIB libspdk_event_vmd.a 00:04:00.393 LIB libspdk_event_fsdev.a 
00:04:00.393 LIB libspdk_event_keyring.a 00:04:00.393 LIB libspdk_event_vhost_blk.a 00:04:00.393 LIB libspdk_event_sock.a 00:04:00.393 LIB libspdk_event_scheduler.a 00:04:00.393 LIB libspdk_event_iobuf.a 00:04:00.393 SO libspdk_event_vfu_tgt.so.3.0 00:04:00.393 SO libspdk_event_vmd.so.6.0 00:04:00.393 SO libspdk_event_keyring.so.1.0 00:04:00.393 SO libspdk_event_fsdev.so.1.0 00:04:00.393 SO libspdk_event_vhost_blk.so.3.0 00:04:00.393 SO libspdk_event_sock.so.5.0 00:04:00.393 SO libspdk_event_scheduler.so.4.0 00:04:00.393 SO libspdk_event_iobuf.so.3.0 00:04:00.393 SYMLINK libspdk_event_vfu_tgt.so 00:04:00.393 SYMLINK libspdk_event_vmd.so 00:04:00.393 SYMLINK libspdk_event_keyring.so 00:04:00.393 SYMLINK libspdk_event_vhost_blk.so 00:04:00.393 SYMLINK libspdk_event_fsdev.so 00:04:00.393 SYMLINK libspdk_event_scheduler.so 00:04:00.393 SYMLINK libspdk_event_sock.so 00:04:00.653 SYMLINK libspdk_event_iobuf.so 00:04:00.913 CC module/event/subsystems/accel/accel.o 00:04:00.913 LIB libspdk_event_accel.a 00:04:01.173 SO libspdk_event_accel.so.6.0 00:04:01.173 SYMLINK libspdk_event_accel.so 00:04:01.434 CC module/event/subsystems/bdev/bdev.o 00:04:01.694 LIB libspdk_event_bdev.a 00:04:01.694 SO libspdk_event_bdev.so.6.0 00:04:01.694 SYMLINK libspdk_event_bdev.so 00:04:02.264 CC module/event/subsystems/ublk/ublk.o 00:04:02.264 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:02.264 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:02.264 CC module/event/subsystems/scsi/scsi.o 00:04:02.264 CC module/event/subsystems/nbd/nbd.o 00:04:02.264 LIB libspdk_event_ublk.a 00:04:02.264 LIB libspdk_event_nbd.a 00:04:02.264 LIB libspdk_event_scsi.a 00:04:02.264 SO libspdk_event_ublk.so.3.0 00:04:02.264 SO libspdk_event_nbd.so.6.0 00:04:02.264 SO libspdk_event_scsi.so.6.0 00:04:02.264 LIB libspdk_event_nvmf.a 00:04:02.264 SYMLINK libspdk_event_ublk.so 00:04:02.264 SYMLINK libspdk_event_nbd.so 00:04:02.264 SYMLINK libspdk_event_scsi.so 00:04:02.264 SO libspdk_event_nvmf.so.6.0 00:04:02.524 SYMLINK libspdk_event_nvmf.so 00:04:02.784 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.784 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.784 LIB libspdk_event_vhost_scsi.a 00:04:02.784 LIB libspdk_event_iscsi.a 00:04:02.784 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.784 SO libspdk_event_iscsi.so.6.0 00:04:03.044 SYMLINK libspdk_event_iscsi.so 00:04:03.044 SYMLINK libspdk_event_vhost_scsi.so 00:04:03.044 SO libspdk.so.6.0 00:04:03.044 SYMLINK libspdk.so 00:04:03.624 CC app/spdk_nvme_identify/identify.o 00:04:03.624 CC app/spdk_top/spdk_top.o 00:04:03.624 CC app/spdk_lspci/spdk_lspci.o 00:04:03.624 CC test/rpc_client/rpc_client_test.o 00:04:03.624 CC app/spdk_nvme_perf/perf.o 00:04:03.624 CXX app/trace/trace.o 00:04:03.624 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.624 CC app/trace_record/trace_record.o 00:04:03.624 TEST_HEADER include/spdk/accel.h 00:04:03.624 TEST_HEADER include/spdk/accel_module.h 00:04:03.624 TEST_HEADER include/spdk/assert.h 00:04:03.624 TEST_HEADER include/spdk/barrier.h 00:04:03.624 TEST_HEADER include/spdk/bdev.h 00:04:03.624 TEST_HEADER include/spdk/base64.h 00:04:03.624 TEST_HEADER include/spdk/bdev_module.h 00:04:03.624 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.624 TEST_HEADER include/spdk/bit_array.h 00:04:03.624 TEST_HEADER include/spdk/bit_pool.h 00:04:03.624 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.624 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.624 TEST_HEADER include/spdk/blobfs.h 00:04:03.624 TEST_HEADER include/spdk/blob.h 00:04:03.624 TEST_HEADER 
include/spdk/conf.h 00:04:03.624 TEST_HEADER include/spdk/config.h 00:04:03.624 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.624 TEST_HEADER include/spdk/cpuset.h 00:04:03.624 TEST_HEADER include/spdk/crc16.h 00:04:03.624 TEST_HEADER include/spdk/crc32.h 00:04:03.624 TEST_HEADER include/spdk/crc64.h 00:04:03.624 TEST_HEADER include/spdk/dif.h 00:04:03.624 TEST_HEADER include/spdk/dma.h 00:04:03.624 TEST_HEADER include/spdk/endian.h 00:04:03.624 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.624 TEST_HEADER include/spdk/env.h 00:04:03.624 TEST_HEADER include/spdk/fd_group.h 00:04:03.624 TEST_HEADER include/spdk/fd.h 00:04:03.624 TEST_HEADER include/spdk/event.h 00:04:03.624 TEST_HEADER include/spdk/fsdev.h 00:04:03.624 TEST_HEADER include/spdk/file.h 00:04:03.624 TEST_HEADER include/spdk/fsdev_module.h 00:04:03.624 TEST_HEADER include/spdk/hexlify.h 00:04:03.624 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.624 TEST_HEADER include/spdk/ftl.h 00:04:03.624 CC app/nvmf_tgt/nvmf_main.o 00:04:03.624 TEST_HEADER include/spdk/init.h 00:04:03.624 TEST_HEADER include/spdk/idxd.h 00:04:03.624 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.624 TEST_HEADER include/spdk/histogram_data.h 00:04:03.624 TEST_HEADER include/spdk/iscsi_spec.h 00:04:03.624 TEST_HEADER include/spdk/ioat.h 00:04:03.624 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.624 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.624 TEST_HEADER include/spdk/json.h 00:04:03.624 TEST_HEADER include/spdk/keyring.h 00:04:03.624 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.624 CC app/spdk_dd/spdk_dd.o 00:04:03.624 TEST_HEADER include/spdk/keyring_module.h 00:04:03.624 TEST_HEADER include/spdk/likely.h 00:04:03.624 TEST_HEADER include/spdk/log.h 00:04:03.624 TEST_HEADER include/spdk/md5.h 00:04:03.624 TEST_HEADER include/spdk/lvol.h 00:04:03.624 TEST_HEADER include/spdk/mmio.h 00:04:03.624 TEST_HEADER include/spdk/memory.h 00:04:03.624 TEST_HEADER include/spdk/nbd.h 00:04:03.624 TEST_HEADER include/spdk/net.h 00:04:03.624 TEST_HEADER include/spdk/notify.h 00:04:03.624 TEST_HEADER include/spdk/nvme.h 00:04:03.624 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.624 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:03.624 TEST_HEADER include/spdk/nvme_spec.h 00:04:03.624 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.624 TEST_HEADER include/spdk/nvme_zns.h 00:04:03.624 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:03.624 TEST_HEADER include/spdk/nvmf.h 00:04:03.624 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:03.624 TEST_HEADER include/spdk/nvmf_spec.h 00:04:03.624 TEST_HEADER include/spdk/opal.h 00:04:03.624 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.624 TEST_HEADER include/spdk/pci_ids.h 00:04:03.624 TEST_HEADER include/spdk/opal_spec.h 00:04:03.624 TEST_HEADER include/spdk/queue.h 00:04:03.624 TEST_HEADER include/spdk/pipe.h 00:04:03.624 CC app/spdk_tgt/spdk_tgt.o 00:04:03.624 TEST_HEADER include/spdk/rpc.h 00:04:03.624 TEST_HEADER include/spdk/scheduler.h 00:04:03.624 TEST_HEADER include/spdk/reduce.h 00:04:03.624 TEST_HEADER include/spdk/scsi.h 00:04:03.624 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.624 TEST_HEADER include/spdk/sock.h 00:04:03.624 TEST_HEADER include/spdk/stdinc.h 00:04:03.624 TEST_HEADER include/spdk/thread.h 00:04:03.624 TEST_HEADER include/spdk/string.h 00:04:03.624 TEST_HEADER include/spdk/trace.h 00:04:03.624 TEST_HEADER include/spdk/trace_parser.h 00:04:03.624 TEST_HEADER include/spdk/ublk.h 00:04:03.624 TEST_HEADER include/spdk/tree.h 00:04:03.624 TEST_HEADER include/spdk/util.h 00:04:03.624 TEST_HEADER include/spdk/version.h 
00:04:03.625 TEST_HEADER include/spdk/uuid.h 00:04:03.625 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.625 TEST_HEADER include/spdk/vhost.h 00:04:03.625 TEST_HEADER include/spdk/xor.h 00:04:03.625 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.625 TEST_HEADER include/spdk/vmd.h 00:04:03.625 TEST_HEADER include/spdk/zipf.h 00:04:03.625 CXX test/cpp_headers/accel.o 00:04:03.625 CXX test/cpp_headers/accel_module.o 00:04:03.625 CXX test/cpp_headers/assert.o 00:04:03.625 CXX test/cpp_headers/barrier.o 00:04:03.625 CXX test/cpp_headers/base64.o 00:04:03.625 CXX test/cpp_headers/bdev_module.o 00:04:03.625 CXX test/cpp_headers/bdev.o 00:04:03.625 CXX test/cpp_headers/bdev_zone.o 00:04:03.625 CXX test/cpp_headers/bit_pool.o 00:04:03.625 CXX test/cpp_headers/blobfs_bdev.o 00:04:03.625 CXX test/cpp_headers/bit_array.o 00:04:03.625 CXX test/cpp_headers/blobfs.o 00:04:03.625 CXX test/cpp_headers/blob.o 00:04:03.625 CXX test/cpp_headers/config.o 00:04:03.625 CXX test/cpp_headers/cpuset.o 00:04:03.625 CXX test/cpp_headers/conf.o 00:04:03.625 CXX test/cpp_headers/blob_bdev.o 00:04:03.625 CXX test/cpp_headers/crc16.o 00:04:03.625 CXX test/cpp_headers/crc32.o 00:04:03.625 CXX test/cpp_headers/crc64.o 00:04:03.625 CXX test/cpp_headers/dif.o 00:04:03.625 CXX test/cpp_headers/dma.o 00:04:03.625 CXX test/cpp_headers/endian.o 00:04:03.625 CXX test/cpp_headers/env_dpdk.o 00:04:03.625 CXX test/cpp_headers/env.o 00:04:03.625 CXX test/cpp_headers/event.o 00:04:03.625 CXX test/cpp_headers/fsdev.o 00:04:03.625 CXX test/cpp_headers/fd_group.o 00:04:03.625 CXX test/cpp_headers/file.o 00:04:03.625 CXX test/cpp_headers/ftl.o 00:04:03.625 CXX test/cpp_headers/fd.o 00:04:03.625 CXX test/cpp_headers/fsdev_module.o 00:04:03.625 CXX test/cpp_headers/gpt_spec.o 00:04:03.625 CXX test/cpp_headers/histogram_data.o 00:04:03.625 CXX test/cpp_headers/hexlify.o 00:04:03.625 CXX test/cpp_headers/init.o 00:04:03.625 CXX test/cpp_headers/idxd_spec.o 00:04:03.625 CXX test/cpp_headers/idxd.o 00:04:03.625 CXX test/cpp_headers/iscsi_spec.o 00:04:03.625 CXX test/cpp_headers/ioat.o 00:04:03.625 CXX test/cpp_headers/keyring.o 00:04:03.625 CXX test/cpp_headers/ioat_spec.o 00:04:03.625 CXX test/cpp_headers/jsonrpc.o 00:04:03.625 CXX test/cpp_headers/json.o 00:04:03.625 CXX test/cpp_headers/keyring_module.o 00:04:03.625 CXX test/cpp_headers/log.o 00:04:03.625 CXX test/cpp_headers/likely.o 00:04:03.625 CXX test/cpp_headers/lvol.o 00:04:03.625 CXX test/cpp_headers/memory.o 00:04:03.625 CXX test/cpp_headers/md5.o 00:04:03.625 CXX test/cpp_headers/mmio.o 00:04:03.625 CXX test/cpp_headers/nbd.o 00:04:03.625 CXX test/cpp_headers/net.o 00:04:03.625 CXX test/cpp_headers/notify.o 00:04:03.625 CXX test/cpp_headers/nvme.o 00:04:03.625 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.625 CXX test/cpp_headers/nvme_intel.o 00:04:03.625 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.625 CXX test/cpp_headers/nvme_zns.o 00:04:03.625 CXX test/cpp_headers/nvme_spec.o 00:04:03.625 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.625 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.625 CXX test/cpp_headers/nvmf.o 00:04:03.625 CXX test/cpp_headers/nvmf_spec.o 00:04:03.625 CXX test/cpp_headers/nvmf_transport.o 00:04:03.625 CXX test/cpp_headers/opal.o 00:04:03.625 CXX test/cpp_headers/opal_spec.o 00:04:03.901 CC examples/ioat/perf/perf.o 00:04:03.901 CXX test/cpp_headers/pci_ids.o 00:04:03.901 CC test/thread/poller_perf/poller_perf.o 00:04:03.901 CC examples/util/zipf/zipf.o 00:04:03.901 CC examples/ioat/verify/verify.o 00:04:03.901 CC test/app/histogram_perf/histogram_perf.o 
00:04:03.901 CC test/app/jsoncat/jsoncat.o 00:04:03.901 CC test/app/stub/stub.o 00:04:03.901 CC app/fio/nvme/fio_plugin.o 00:04:03.901 CC test/env/memory/memory_ut.o 00:04:03.901 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.901 CC test/env/pci/pci_ut.o 00:04:03.901 CC test/dma/test_dma/test_dma.o 00:04:03.901 CC test/env/vtophys/vtophys.o 00:04:03.901 CC test/app/bdev_svc/bdev_svc.o 00:04:03.901 LINK spdk_lspci 00:04:03.901 CC app/fio/bdev/fio_plugin.o 00:04:04.167 LINK interrupt_tgt 00:04:04.167 LINK nvmf_tgt 00:04:04.167 LINK rpc_client_test 00:04:04.167 CC test/env/mem_callbacks/mem_callbacks.o 00:04:04.429 LINK spdk_nvme_discover 00:04:04.429 LINK iscsi_tgt 00:04:04.429 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:04.429 LINK poller_perf 00:04:04.429 CXX test/cpp_headers/pipe.o 00:04:04.429 CXX test/cpp_headers/queue.o 00:04:04.429 CXX test/cpp_headers/reduce.o 00:04:04.429 CXX test/cpp_headers/rpc.o 00:04:04.429 CXX test/cpp_headers/scheduler.o 00:04:04.429 CXX test/cpp_headers/scsi.o 00:04:04.429 CXX test/cpp_headers/scsi_spec.o 00:04:04.429 LINK zipf 00:04:04.429 LINK jsoncat 00:04:04.429 CXX test/cpp_headers/sock.o 00:04:04.429 CXX test/cpp_headers/stdinc.o 00:04:04.429 CXX test/cpp_headers/string.o 00:04:04.429 CXX test/cpp_headers/thread.o 00:04:04.429 CXX test/cpp_headers/trace.o 00:04:04.429 CXX test/cpp_headers/trace_parser.o 00:04:04.429 LINK histogram_perf 00:04:04.429 CXX test/cpp_headers/tree.o 00:04:04.429 CXX test/cpp_headers/util.o 00:04:04.429 CXX test/cpp_headers/ublk.o 00:04:04.429 CXX test/cpp_headers/uuid.o 00:04:04.429 CXX test/cpp_headers/version.o 00:04:04.429 LINK spdk_tgt 00:04:04.429 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.429 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.429 CXX test/cpp_headers/vhost.o 00:04:04.429 CXX test/cpp_headers/vmd.o 00:04:04.429 CXX test/cpp_headers/xor.o 00:04:04.429 CXX test/cpp_headers/zipf.o 00:04:04.429 LINK spdk_trace_record 00:04:04.429 LINK env_dpdk_post_init 00:04:04.429 LINK ioat_perf 00:04:04.429 LINK vtophys 00:04:04.429 LINK stub 00:04:04.429 LINK bdev_svc 00:04:04.429 LINK verify 00:04:04.429 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.429 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.429 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.688 LINK spdk_dd 00:04:04.688 LINK mem_callbacks 00:04:04.688 LINK spdk_trace 00:04:04.688 LINK pci_ut 00:04:04.948 LINK test_dma 00:04:04.948 CC test/event/reactor_perf/reactor_perf.o 00:04:04.948 CC test/event/reactor/reactor.o 00:04:04.948 LINK spdk_nvme_perf 00:04:04.948 CC test/event/event_perf/event_perf.o 00:04:04.948 CC examples/idxd/perf/perf.o 00:04:04.948 LINK nvme_fuzz 00:04:04.948 CC examples/vmd/led/led.o 00:04:04.948 LINK spdk_nvme_identify 00:04:04.948 CC examples/sock/hello_world/hello_sock.o 00:04:04.948 CC test/event/app_repeat/app_repeat.o 00:04:04.948 CC examples/vmd/lsvmd/lsvmd.o 00:04:04.948 CC test/event/scheduler/scheduler.o 00:04:04.948 CC examples/thread/thread/thread_ex.o 00:04:04.948 LINK spdk_top 00:04:04.948 LINK vhost_fuzz 00:04:04.948 LINK spdk_nvme 00:04:04.948 LINK spdk_bdev 00:04:05.207 LINK reactor_perf 00:04:05.207 LINK reactor 00:04:05.207 CC app/vhost/vhost.o 00:04:05.207 LINK event_perf 00:04:05.207 LINK led 00:04:05.207 LINK lsvmd 00:04:05.207 LINK memory_ut 00:04:05.207 LINK app_repeat 00:04:05.207 LINK hello_sock 00:04:05.207 LINK scheduler 00:04:05.207 LINK idxd_perf 00:04:05.207 LINK thread 00:04:05.207 LINK vhost 00:04:05.467 CC test/nvme/sgl/sgl.o 00:04:05.467 CC test/nvme/overhead/overhead.o 
00:04:05.467 CC test/nvme/err_injection/err_injection.o 00:04:05.467 CC test/nvme/aer/aer.o 00:04:05.467 CC test/nvme/simple_copy/simple_copy.o 00:04:05.467 CC test/nvme/reserve/reserve.o 00:04:05.467 CC test/nvme/boot_partition/boot_partition.o 00:04:05.467 CC test/nvme/reset/reset.o 00:04:05.467 CC test/nvme/cuse/cuse.o 00:04:05.467 CC test/nvme/startup/startup.o 00:04:05.467 CC test/nvme/fdp/fdp.o 00:04:05.467 CC test/nvme/connect_stress/connect_stress.o 00:04:05.467 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.467 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.467 CC test/nvme/compliance/nvme_compliance.o 00:04:05.467 CC test/nvme/e2edp/nvme_dp.o 00:04:05.467 CC test/accel/dif/dif.o 00:04:05.467 CC test/blobfs/mkfs/mkfs.o 00:04:05.467 CC test/lvol/esnap/esnap.o 00:04:05.735 LINK boot_partition 00:04:05.735 LINK err_injection 00:04:05.735 LINK startup 00:04:05.735 LINK connect_stress 00:04:05.735 LINK reserve 00:04:05.735 LINK doorbell_aers 00:04:05.735 LINK fused_ordering 00:04:05.735 LINK simple_copy 00:04:05.735 LINK sgl 00:04:05.735 LINK aer 00:04:05.735 LINK reset 00:04:05.735 LINK mkfs 00:04:05.735 LINK overhead 00:04:05.735 LINK nvme_dp 00:04:05.735 CC examples/nvme/hello_world/hello_world.o 00:04:05.735 CC examples/nvme/reconnect/reconnect.o 00:04:05.735 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.735 LINK fdp 00:04:05.735 CC examples/nvme/arbitration/arbitration.o 00:04:05.735 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:05.735 CC examples/nvme/hotplug/hotplug.o 00:04:05.735 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:05.735 LINK nvme_compliance 00:04:05.735 CC examples/nvme/abort/abort.o 00:04:05.735 CC examples/accel/perf/accel_perf.o 00:04:05.735 CC examples/blob/hello_world/hello_blob.o 00:04:05.735 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:05.735 CC examples/blob/cli/blobcli.o 00:04:06.023 LINK cmb_copy 00:04:06.023 LINK pmr_persistence 00:04:06.023 LINK hello_world 00:04:06.023 LINK hotplug 00:04:06.023 LINK iscsi_fuzz 00:04:06.023 LINK dif 00:04:06.023 LINK arbitration 00:04:06.023 LINK reconnect 00:04:06.023 LINK abort 00:04:06.023 LINK hello_blob 00:04:06.023 LINK hello_fsdev 00:04:06.309 LINK nvme_manage 00:04:06.309 LINK accel_perf 00:04:06.309 LINK blobcli 00:04:06.578 LINK cuse 00:04:06.578 CC test/bdev/bdevio/bdevio.o 00:04:06.851 CC examples/bdev/hello_world/hello_bdev.o 00:04:06.851 CC examples/bdev/bdevperf/bdevperf.o 00:04:06.851 LINK bdevio 00:04:06.851 LINK hello_bdev 00:04:07.474 LINK bdevperf 00:04:07.761 CC examples/nvmf/nvmf/nvmf.o 00:04:08.041 LINK nvmf 00:04:08.981 LINK esnap 00:04:09.550 00:04:09.550 real 0m55.329s 00:04:09.550 user 6m48.420s 00:04:09.550 sys 2m58.011s 00:04:09.550 22:09:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:09.550 22:09:58 make -- common/autotest_common.sh@10 -- $ set +x 00:04:09.550 ************************************ 00:04:09.550 END TEST make 00:04:09.550 ************************************ 00:04:09.550 22:09:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:09.550 22:09:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:09.550 22:09:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:09.550 22:09:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.550 22:09:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:09.550 22:09:58 -- pm/common@44 -- $ pid=7592 00:04:09.550 22:09:58 -- pm/common@50 -- $ kill -TERM 7592 00:04:09.550 22:09:58 
-- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.550 22:09:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:09.550 22:09:58 -- pm/common@44 -- $ pid=7594 00:04:09.550 22:09:58 -- pm/common@50 -- $ kill -TERM 7594 00:04:09.550 22:09:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.550 22:09:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:09.550 22:09:58 -- pm/common@44 -- $ pid=7596 00:04:09.550 22:09:58 -- pm/common@50 -- $ kill -TERM 7596 00:04:09.550 22:09:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.551 22:09:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:09.551 22:09:58 -- pm/common@44 -- $ pid=7619 00:04:09.551 22:09:58 -- pm/common@50 -- $ sudo -E kill -TERM 7619 00:04:09.551 22:09:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:09.551 22:09:59 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:09.551 22:09:59 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.551 22:09:59 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.551 22:09:59 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.551 22:09:59 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.551 22:09:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.551 22:09:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.551 22:09:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.551 22:09:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.551 22:09:59 -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.551 22:09:59 -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.551 22:09:59 -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.551 22:09:59 -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.551 22:09:59 -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.551 22:09:59 -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.551 22:09:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.551 22:09:59 -- scripts/common.sh@344 -- # case "$op" in 00:04:09.551 22:09:59 -- scripts/common.sh@345 -- # : 1 00:04:09.551 22:09:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.551 22:09:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.551 22:09:59 -- scripts/common.sh@365 -- # decimal 1 00:04:09.551 22:09:59 -- scripts/common.sh@353 -- # local d=1 00:04:09.551 22:09:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.551 22:09:59 -- scripts/common.sh@355 -- # echo 1 00:04:09.551 22:09:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.551 22:09:59 -- scripts/common.sh@366 -- # decimal 2 00:04:09.551 22:09:59 -- scripts/common.sh@353 -- # local d=2 00:04:09.551 22:09:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.551 22:09:59 -- scripts/common.sh@355 -- # echo 2 00:04:09.551 22:09:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.551 22:09:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.551 22:09:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.551 22:09:59 -- scripts/common.sh@368 -- # return 0 00:04:09.551 22:09:59 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.551 22:09:59 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.551 --rc genhtml_branch_coverage=1 00:04:09.551 --rc genhtml_function_coverage=1 00:04:09.551 --rc genhtml_legend=1 00:04:09.551 --rc geninfo_all_blocks=1 00:04:09.551 --rc geninfo_unexecuted_blocks=1 00:04:09.551 00:04:09.551 ' 00:04:09.551 22:09:59 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.551 --rc genhtml_branch_coverage=1 00:04:09.551 --rc genhtml_function_coverage=1 00:04:09.551 --rc genhtml_legend=1 00:04:09.551 --rc geninfo_all_blocks=1 00:04:09.551 --rc geninfo_unexecuted_blocks=1 00:04:09.551 00:04:09.551 ' 00:04:09.551 22:09:59 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.551 --rc genhtml_branch_coverage=1 00:04:09.551 --rc genhtml_function_coverage=1 00:04:09.551 --rc genhtml_legend=1 00:04:09.551 --rc geninfo_all_blocks=1 00:04:09.551 --rc geninfo_unexecuted_blocks=1 00:04:09.551 00:04:09.551 ' 00:04:09.551 22:09:59 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.551 --rc genhtml_branch_coverage=1 00:04:09.551 --rc genhtml_function_coverage=1 00:04:09.551 --rc genhtml_legend=1 00:04:09.551 --rc geninfo_all_blocks=1 00:04:09.551 --rc geninfo_unexecuted_blocks=1 00:04:09.551 00:04:09.551 ' 00:04:09.551 22:09:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.551 22:09:59 -- nvmf/common.sh@7 -- # uname -s 00:04:09.551 22:09:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.551 22:09:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.551 22:09:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.551 22:09:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.551 22:09:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.551 22:09:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.551 22:09:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.551 22:09:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.551 22:09:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.551 22:09:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.551 22:09:59 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:09.551 22:09:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:09.811 22:09:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.811 22:09:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.811 22:09:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:09.811 22:09:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:09.811 22:09:59 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.811 22:09:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:09.811 22:09:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.811 22:09:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.811 22:09:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.811 22:09:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.811 22:09:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.811 22:09:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.811 22:09:59 -- paths/export.sh@5 -- # export PATH 00:04:09.811 22:09:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.811 22:09:59 -- nvmf/common.sh@51 -- # : 0 00:04:09.811 22:09:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:09.811 22:09:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:09.811 22:09:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:09.811 22:09:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.811 22:09:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.811 22:09:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:09.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:09.811 22:09:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:09.811 22:09:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:09.811 22:09:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:09.811 22:09:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:09.811 22:09:59 -- spdk/autotest.sh@32 -- # uname -s 00:04:09.811 22:09:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:09.811 22:09:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:09.811 22:09:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
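One genuine defect surfaces in the trace just above: at nvmf/common.sh line 33 the test `'[' '' -eq 1 ']'` fails with `[: : integer expression expected`, because `-eq` requires integer operands and the tested variable expanded to an empty string. The run continues (a failed `[` merely returns non-zero), but the condition silently evaluates false whether or not the flag was meant to be set. A sketch of the conventional guard, with a hypothetical flag name standing in for whichever variable is unset here:

    # Failing shape, as recorded above:
    #   [ '' -eq 1 ]        # bash: [: : integer expression expected
    # Defensive shape: default an unset/empty flag to 0 before comparing.
    SPDK_TEST_EXAMPLE_FLAG=""                    # hypothetical, deliberately empty

    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi

The `:-` form of parameter expansion covers both unset and empty variables, so the comparison always sees an integer.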
00:04:09.811 22:09:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:09.811 22:09:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:09.811 22:09:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:09.811 22:09:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:09.811 22:09:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:09.811 22:09:59 -- spdk/autotest.sh@48 -- # udevadm_pid=88025 00:04:09.811 22:09:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:09.811 22:09:59 -- pm/common@17 -- # local monitor 00:04:09.811 22:09:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:09.811 22:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.811 22:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.811 22:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.811 22:09:59 -- pm/common@21 -- # date +%s 00:04:09.811 22:09:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.811 22:09:59 -- pm/common@21 -- # date +%s 00:04:09.811 22:09:59 -- pm/common@25 -- # sleep 1 00:04:09.811 22:09:59 -- pm/common@21 -- # date +%s 00:04:09.811 22:09:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734383399 00:04:09.811 22:09:59 -- pm/common@21 -- # date +%s 00:04:09.811 22:09:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734383399 00:04:09.811 22:09:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734383399 00:04:09.811 22:09:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734383399 00:04:09.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734383399_collect-cpu-load.pm.log 00:04:09.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734383399_collect-vmstat.pm.log 00:04:09.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734383399_collect-cpu-temp.pm.log 00:04:09.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734383399_collect-bmc-pm.bmc.pm.log 00:04:10.751 22:10:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:10.751 22:10:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:10.751 22:10:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.751 22:10:00 -- common/autotest_common.sh@10 -- # set +x 00:04:10.751 22:10:00 -- spdk/autotest.sh@59 -- # create_test_list 00:04:10.751 22:10:00 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:10.751 22:10:00 -- common/autotest_common.sh@10 -- # set +x 00:04:10.751 22:10:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:10.751 22:10:00 -- 
spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.751 22:10:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.751 22:10:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:10.751 22:10:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.751 22:10:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:10.751 22:10:00 -- common/autotest_common.sh@1457 -- # uname 00:04:10.751 22:10:00 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:10.751 22:10:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:10.751 22:10:00 -- common/autotest_common.sh@1477 -- # uname 00:04:10.751 22:10:00 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:10.751 22:10:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:10.751 22:10:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:11.010 lcov: LCOV version 1.15 00:04:11.010 22:10:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:29.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:29.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:35.677 22:10:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:35.677 22:10:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.677 22:10:25 -- common/autotest_common.sh@10 -- # set +x 00:04:35.677 22:10:25 -- spdk/autotest.sh@78 -- # rm -f 00:04:35.677 22:10:25 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.970 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:38.970 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:38.970 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:38.970 22:10:28 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:38.970 22:10:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:38.970 22:10:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:38.970 22:10:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:38.970 22:10:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:38.970 22:10:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:38.970 22:10:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:38.970 22:10:28 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:38.970 22:10:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:38.970 22:10:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:38.970 22:10:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:38.970 22:10:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.970 22:10:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:38.970 22:10:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:38.970 22:10:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.970 22:10:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:38.970 22:10:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:38.970 22:10:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:38.970 22:10:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.970 No valid GPT data, bailing 00:04:38.970 22:10:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.970 22:10:28 -- scripts/common.sh@394 -- # pt= 00:04:38.970 22:10:28 -- scripts/common.sh@395 -- # return 1 00:04:38.970 22:10:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.970 1+0 records in 00:04:38.970 1+0 records out 00:04:38.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00159734 s, 656 MB/s 00:04:38.970 22:10:28 -- spdk/autotest.sh@105 -- # sync 00:04:38.970 22:10:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.970 22:10:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.970 22:10:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:44.248 22:10:33 -- spdk/autotest.sh@111 -- # uname -s 00:04:44.248 22:10:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:44.248 22:10:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:44.248 22:10:33 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:47.543 Hugepages 00:04:47.543 node hugesize free / total 00:04:47.543 node0 1048576kB 0 / 0 00:04:47.543 node0 2048kB 0 / 0 00:04:47.543 node1 1048576kB 0 / 0 00:04:47.543 node1 2048kB 0 / 0 00:04:47.543 00:04:47.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:47.543 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:47.543 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:47.543 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:47.543 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.1 8086 2021 1 
ioatdma - - 00:04:47.543 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:47.543 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:47.543 22:10:36 -- spdk/autotest.sh@117 -- # uname -s 00:04:47.543 22:10:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:47.543 22:10:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:47.543 22:10:36 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.083 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:50.083 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:50.083 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:50.083 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:50.083 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:50.084 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:51.023 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.023 22:10:40 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:51.962 22:10:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:51.962 22:10:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:51.962 22:10:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.962 22:10:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:51.962 22:10:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:51.962 22:10:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:51.962 22:10:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.962 22:10:41 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:51.962 22:10:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.223 22:10:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:52.223 22:10:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:52.223 22:10:41 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.763 Waiting for block devices as requested 00:04:54.763 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:55.023 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.023 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:55.283 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:55.283 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:55.283 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:55.283 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:55.543 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:55.543 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:55.543 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.804 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:04:55.804 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:55.804 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:56.070 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:56.070 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:56.070 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:56.070 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:56.331 22:10:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:56.331 22:10:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:56.331 22:10:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:56.331 22:10:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:56.331 22:10:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:56.331 22:10:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:56.331 22:10:45 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:56.331 22:10:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:56.331 22:10:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:56.331 22:10:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:56.331 22:10:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:56.331 22:10:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:56.331 22:10:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:56.331 22:10:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:56.331 22:10:45 -- common/autotest_common.sh@1543 -- # continue 00:04:56.331 22:10:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.331 22:10:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.331 22:10:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.331 22:10:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.331 22:10:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.331 22:10:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.331 22:10:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.626 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.626 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.196 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.196 22:10:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:00.196 22:10:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.196 22:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.196 22:10:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:00.196 22:10:49 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:00.196 22:10:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.196 22:10:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:00.196 22:10:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:00.196 22:10:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:00.196 22:10:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:00.196 22:10:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:00.196 22:10:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:00.196 22:10:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:00.196 22:10:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.196 22:10:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.196 22:10:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:00.196 22:10:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:00.196 22:10:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:00.196 22:10:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:00.196 22:10:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:00.196 22:10:49 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:00.196 22:10:49 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:00.196 22:10:49 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:00.196 22:10:49 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:00.196 22:10:49 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:00.196 22:10:49 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:00.196 22:10:49 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=102179 00:05:00.197 22:10:49 -- common/autotest_common.sh@1585 -- # waitforlisten 102179 00:05:00.197 22:10:49 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.197 22:10:49 -- common/autotest_common.sh@835 -- # '[' -z 102179 ']' 00:05:00.197 22:10:49 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.197 22:10:49 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.197 22:10:49 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.197 22:10:49 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.197 22:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:00.456 [2024-12-16 22:10:49.931780] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
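Editor's note: the get_nvme_bdfs / get_nvme_bdfs_by_id / OACS trace above reduces to three probes. A hedged sketch against the same paths and values the log shows (gen_nvme.sh, jq, nvme-cli, /sys/bus/pci); the opal_bdfs array name is illustrative:

  # NVMe BDF discovery, as traced in get_nvme_bdfs (-> 0000:5e:00.0 here)
  bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))

  # OACS: bit 3 (mask 0x8) advertises namespace management; the log shows oacs=0xf
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) && echo "namespace management supported"

  # Keep only controllers whose PCI device id is 0x0a54, as get_nvme_bdfs_by_id does
  for bdf in "${bdfs[@]}"; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
  done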
00:05:00.456 [2024-12-16 22:10:49.931825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102179 ] 00:05:00.456 [2024-12-16 22:10:50.004701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.456 [2024-12-16 22:10:50.028146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.715 22:10:50 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.715 22:10:50 -- common/autotest_common.sh@868 -- # return 0 00:05:00.715 22:10:50 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:00.715 22:10:50 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:00.715 22:10:50 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:04.007 nvme0n1 00:05:04.007 22:10:53 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:04.007 [2024-12-16 22:10:53.426058] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:04.007 [2024-12-16 22:10:53.426088] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:04.007 request: 00:05:04.007 { 00:05:04.007 "nvme_ctrlr_name": "nvme0", 00:05:04.007 "password": "test", 00:05:04.007 "method": "bdev_nvme_opal_revert", 00:05:04.007 "req_id": 1 00:05:04.007 } 00:05:04.007 Got JSON-RPC error response 00:05:04.007 response: 00:05:04.007 { 00:05:04.007 "code": -32603, 00:05:04.007 "message": "Internal error" 00:05:04.007 } 00:05:04.007 22:10:53 -- common/autotest_common.sh@1591 -- # true 00:05:04.007 22:10:53 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:04.007 22:10:53 -- common/autotest_common.sh@1595 -- # killprocess 102179 00:05:04.007 22:10:53 -- common/autotest_common.sh@954 -- # '[' -z 102179 ']' 00:05:04.007 22:10:53 -- common/autotest_common.sh@958 -- # kill -0 102179 00:05:04.007 22:10:53 -- common/autotest_common.sh@959 -- # uname 00:05:04.007 22:10:53 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.007 22:10:53 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102179 00:05:04.007 22:10:53 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.007 22:10:53 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.007 22:10:53 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102179' 00:05:04.007 killing process with pid 102179 00:05:04.007 22:10:53 -- common/autotest_common.sh@973 -- # kill 102179 00:05:04.007 22:10:53 -- common/autotest_common.sh@978 -- # wait 102179 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.007 EAL: Unexpected size 
0 of DMA remapping cleared instead of 2097152
[editor's note: the EAL "Unexpected size 0 of DMA remapping cleared instead of 2097152" message repeats for a few hundred identical lines while spdk_tgt tears down its hugepage DMA mappings; all but the first and last few duplicates have been trimmed]
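Editor's note: before the teardown noise, the bdev_nvme_opal_revert call earlier in this chunk failed with OPAL error 18 (the drive refused the admin security-provider session), which the RPC layer surfaces as JSON-RPC -32603. The two RPCs, exactly as the log invokes them from the spdk checkout, against a running spdk_tgt:

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # 'test' is the OPAL admin password tried here; fails on this drive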
00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:04.008 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:05.916 22:10:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.916 22:10:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:05.916 22:10:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.916 22:10:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.916 22:10:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.916 22:10:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.916 22:10:55 -- common/autotest_common.sh@10 -- # set +x 00:05:05.916 22:10:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:05.916 22:10:55 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.916 22:10:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.916 22:10:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.916 22:10:55 -- common/autotest_common.sh@10 -- # set +x 00:05:05.916 ************************************ 00:05:05.916 START TEST env 00:05:05.916 ************************************ 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.916 * Looking for test storage... 00:05:05.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.916 22:10:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.916 22:10:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.916 22:10:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.916 22:10:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.916 22:10:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.916 22:10:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.916 22:10:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.916 22:10:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.916 22:10:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.916 22:10:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.916 22:10:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.916 22:10:55 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.916 22:10:55 env -- scripts/common.sh@345 -- # : 1 00:05:05.916 22:10:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.916 22:10:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.916 22:10:55 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.916 22:10:55 env -- scripts/common.sh@353 -- # local d=1 00:05:05.916 22:10:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.916 22:10:55 env -- scripts/common.sh@355 -- # echo 1 00:05:05.916 22:10:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.916 22:10:55 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.916 22:10:55 env -- scripts/common.sh@353 -- # local d=2 00:05:05.916 22:10:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.916 22:10:55 env -- scripts/common.sh@355 -- # echo 2 00:05:05.916 22:10:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.916 22:10:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.916 22:10:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.916 22:10:55 env -- scripts/common.sh@368 -- # return 0 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.916 --rc genhtml_branch_coverage=1 00:05:05.916 --rc genhtml_function_coverage=1 00:05:05.916 --rc genhtml_legend=1 00:05:05.916 --rc geninfo_all_blocks=1 00:05:05.916 --rc geninfo_unexecuted_blocks=1 00:05:05.916 00:05:05.916 ' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.916 --rc genhtml_branch_coverage=1 00:05:05.916 --rc genhtml_function_coverage=1 00:05:05.916 --rc genhtml_legend=1 00:05:05.916 --rc geninfo_all_blocks=1 00:05:05.916 --rc geninfo_unexecuted_blocks=1 00:05:05.916 00:05:05.916 ' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.916 --rc genhtml_branch_coverage=1 00:05:05.916 --rc genhtml_function_coverage=1 00:05:05.916 --rc genhtml_legend=1 00:05:05.916 --rc geninfo_all_blocks=1 00:05:05.916 --rc geninfo_unexecuted_blocks=1 00:05:05.916 00:05:05.916 ' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.916 --rc genhtml_branch_coverage=1 00:05:05.916 --rc genhtml_function_coverage=1 00:05:05.916 --rc genhtml_legend=1 00:05:05.916 --rc geninfo_all_blocks=1 00:05:05.916 --rc geninfo_unexecuted_blocks=1 00:05:05.916 00:05:05.916 ' 00:05:05.916 22:10:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.916 22:10:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.916 ************************************ 00:05:05.916 START TEST env_memory 00:05:05.916 ************************************ 00:05:05.916 22:10:55 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.916 00:05:05.916 00:05:05.916 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.916 http://cunit.sourceforge.net/ 00:05:05.916 00:05:05.916 00:05:05.916 Suite: memory 00:05:05.916 Test: alloc and free memory map ...[2024-12-16 22:10:55.392029] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.916 passed 00:05:05.916 Test: mem map translation ...[2024-12-16 22:10:55.410723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.916 [2024-12-16 22:10:55.410740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.916 [2024-12-16 22:10:55.410775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.916 [2024-12-16 22:10:55.410781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.916 passed 00:05:05.916 Test: mem map registration ...[2024-12-16 22:10:55.447302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.916 [2024-12-16 22:10:55.447326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.916 passed 00:05:05.916 Test: mem map adjacent registrations ...passed 00:05:05.916 00:05:05.916 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.916 suites 1 1 n/a 0 0 00:05:05.916 tests 4 4 4 0 0 00:05:05.916 asserts 152 152 152 0 n/a 00:05:05.916 00:05:05.916 Elapsed time = 0.123 seconds 00:05:05.916 00:05:05.916 real 0m0.132s 00:05:05.916 user 0m0.122s 00:05:05.916 sys 0m0.010s 00:05:05.916 22:10:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.916 22:10:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.916 ************************************ 00:05:05.916 END TEST env_memory 00:05:05.916 ************************************ 00:05:05.916 22:10:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.916 22:10:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.916 22:10:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.916 ************************************ 00:05:05.916 START TEST env_vtophys 00:05:05.916 ************************************ 00:05:05.916 22:10:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.916 EAL: lib.eal log level changed from notice to debug 00:05:05.916 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.916 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.916 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.916 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.916 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.916 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.916 EAL: Detected lcore 6 as core 6 on socket 0 00:05:05.916 EAL: Detected lcore 7 as core 8 on socket 0 00:05:05.916 EAL: Detected lcore 8 as core 9 on socket 0 00:05:05.916 EAL: Detected lcore 9 as core 10 on socket 0 00:05:05.916 EAL: Detected lcore 10 as 
core 11 on socket 0 00:05:05.916 EAL: Detected lcore 11 as core 12 on socket 0 00:05:05.916 EAL: Detected lcore 12 as core 13 on socket 0 00:05:05.916 EAL: Detected lcore 13 as core 16 on socket 0 00:05:05.916 EAL: Detected lcore 14 as core 17 on socket 0 00:05:05.916 EAL: Detected lcore 15 as core 18 on socket 0 00:05:05.916 EAL: Detected lcore 16 as core 19 on socket 0 00:05:05.916 EAL: Detected lcore 17 as core 20 on socket 0 00:05:05.916 EAL: Detected lcore 18 as core 21 on socket 0 00:05:05.916 EAL: Detected lcore 19 as core 25 on socket 0 00:05:05.916 EAL: Detected lcore 20 as core 26 on socket 0 00:05:05.916 EAL: Detected lcore 21 as core 27 on socket 0 00:05:05.916 EAL: Detected lcore 22 as core 28 on socket 0 00:05:05.916 EAL: Detected lcore 23 as core 29 on socket 0 00:05:05.916 EAL: Detected lcore 24 as core 0 on socket 1 00:05:05.916 EAL: Detected lcore 25 as core 1 on socket 1 00:05:05.916 EAL: Detected lcore 26 as core 2 on socket 1 00:05:05.916 EAL: Detected lcore 27 as core 3 on socket 1 00:05:05.916 EAL: Detected lcore 28 as core 4 on socket 1 00:05:05.916 EAL: Detected lcore 29 as core 5 on socket 1 00:05:05.916 EAL: Detected lcore 30 as core 6 on socket 1 00:05:05.916 EAL: Detected lcore 31 as core 8 on socket 1 00:05:05.916 EAL: Detected lcore 32 as core 9 on socket 1 00:05:05.916 EAL: Detected lcore 33 as core 10 on socket 1 00:05:05.917 EAL: Detected lcore 34 as core 11 on socket 1 00:05:05.917 EAL: Detected lcore 35 as core 12 on socket 1 00:05:05.917 EAL: Detected lcore 36 as core 13 on socket 1 00:05:05.917 EAL: Detected lcore 37 as core 16 on socket 1 00:05:05.917 EAL: Detected lcore 38 as core 17 on socket 1 00:05:05.917 EAL: Detected lcore 39 as core 18 on socket 1 00:05:05.917 EAL: Detected lcore 40 as core 19 on socket 1 00:05:05.917 EAL: Detected lcore 41 as core 20 on socket 1 00:05:05.917 EAL: Detected lcore 42 as core 21 on socket 1 00:05:05.917 EAL: Detected lcore 43 as core 25 on socket 1 00:05:05.917 EAL: Detected lcore 44 as core 26 on socket 1 00:05:05.917 EAL: Detected lcore 45 as core 27 on socket 1 00:05:05.917 EAL: Detected lcore 46 as core 28 on socket 1 00:05:05.917 EAL: Detected lcore 47 as core 29 on socket 1 00:05:05.917 EAL: Detected lcore 48 as core 0 on socket 0 00:05:05.917 EAL: Detected lcore 49 as core 1 on socket 0 00:05:05.917 EAL: Detected lcore 50 as core 2 on socket 0 00:05:05.917 EAL: Detected lcore 51 as core 3 on socket 0 00:05:05.917 EAL: Detected lcore 52 as core 4 on socket 0 00:05:05.917 EAL: Detected lcore 53 as core 5 on socket 0 00:05:05.917 EAL: Detected lcore 54 as core 6 on socket 0 00:05:05.917 EAL: Detected lcore 55 as core 8 on socket 0 00:05:05.917 EAL: Detected lcore 56 as core 9 on socket 0 00:05:05.917 EAL: Detected lcore 57 as core 10 on socket 0 00:05:05.917 EAL: Detected lcore 58 as core 11 on socket 0 00:05:05.917 EAL: Detected lcore 59 as core 12 on socket 0 00:05:05.917 EAL: Detected lcore 60 as core 13 on socket 0 00:05:05.917 EAL: Detected lcore 61 as core 16 on socket 0 00:05:05.917 EAL: Detected lcore 62 as core 17 on socket 0 00:05:05.917 EAL: Detected lcore 63 as core 18 on socket 0 00:05:05.917 EAL: Detected lcore 64 as core 19 on socket 0 00:05:05.917 EAL: Detected lcore 65 as core 20 on socket 0 00:05:05.917 EAL: Detected lcore 66 as core 21 on socket 0 00:05:05.917 EAL: Detected lcore 67 as core 25 on socket 0 00:05:05.917 EAL: Detected lcore 68 as core 26 on socket 0 00:05:05.917 EAL: Detected lcore 69 as core 27 on socket 0 00:05:05.917 EAL: Detected lcore 70 as core 28 on socket 0 00:05:05.917 
EAL: Detected lcore 71 as core 29 on socket 0 00:05:05.917 EAL: Detected lcore 72 as core 0 on socket 1 00:05:05.917 EAL: Detected lcore 73 as core 1 on socket 1 00:05:05.917 EAL: Detected lcore 74 as core 2 on socket 1 00:05:05.917 EAL: Detected lcore 75 as core 3 on socket 1 00:05:05.917 EAL: Detected lcore 76 as core 4 on socket 1 00:05:05.917 EAL: Detected lcore 77 as core 5 on socket 1 00:05:05.917 EAL: Detected lcore 78 as core 6 on socket 1 00:05:05.917 EAL: Detected lcore 79 as core 8 on socket 1 00:05:05.917 EAL: Detected lcore 80 as core 9 on socket 1 00:05:05.917 EAL: Detected lcore 81 as core 10 on socket 1 00:05:05.917 EAL: Detected lcore 82 as core 11 on socket 1 00:05:05.917 EAL: Detected lcore 83 as core 12 on socket 1 00:05:05.917 EAL: Detected lcore 84 as core 13 on socket 1 00:05:05.917 EAL: Detected lcore 85 as core 16 on socket 1 00:05:05.917 EAL: Detected lcore 86 as core 17 on socket 1 00:05:05.917 EAL: Detected lcore 87 as core 18 on socket 1 00:05:05.917 EAL: Detected lcore 88 as core 19 on socket 1 00:05:05.917 EAL: Detected lcore 89 as core 20 on socket 1 00:05:05.917 EAL: Detected lcore 90 as core 21 on socket 1 00:05:05.917 EAL: Detected lcore 91 as core 25 on socket 1 00:05:05.917 EAL: Detected lcore 92 as core 26 on socket 1 00:05:05.917 EAL: Detected lcore 93 as core 27 on socket 1 00:05:05.917 EAL: Detected lcore 94 as core 28 on socket 1 00:05:05.917 EAL: Detected lcore 95 as core 29 on socket 1 00:05:05.917 EAL: Maximum logical cores by configuration: 128 00:05:05.917 EAL: Detected CPU lcores: 96 00:05:05.917 EAL: Detected NUMA nodes: 2 00:05:05.917 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:05.917 EAL: Detected shared linkage of DPDK 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:05.917 EAL: Registered [vdev] bus. 00:05:05.917 EAL: bus.vdev log level changed from disabled to notice 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:05.917 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.917 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:05.917 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:05.917 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.917 EAL: No shared files mode enabled, IPC is disabled 00:05:05.917 EAL: Bus pci wants IOVA as 'DC' 00:05:05.917 EAL: Bus vdev wants IOVA as 'DC' 00:05:05.917 EAL: Buses did not request a specific IOVA mode. 00:05:05.917 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:05.917 EAL: Selected IOVA mode 'VA' 00:05:05.917 EAL: Probing VFIO support... 
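Editor's note: the topology and VFIO detection above can be cross-checked with standard Linux tools, nothing SPDK-specific; a hedged sketch (paths are stock sysfs locations):

  lscpu | grep -E 'Socket|NUMA|^CPU\(s\)'   # should agree with the 96 lcores / 2 NUMA nodes detected
  ls /sys/kernel/iommu_groups/ | wc -l      # >0 means an IOMMU is active, which is what lets EAL pick IOVA as 'VA'
  lsmod | grep vfio_pci                     # the driver setup.sh bound the devices to earlier in this log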
00:05:05.917 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.917 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.917 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.917 EAL: VFIO support initialized 00:05:05.917 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.917 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.917 EAL: Setting up physically contiguous memory... 00:05:05.917 EAL: Setting maximum number of open files to 524288 00:05:05.917 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.917 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.917 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.917 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.917 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.917 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.917 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.917 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.917 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:05.917 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.917 EAL: Hugepages will be freed exactly as allocated. 00:05:05.917 EAL: No shared files mode enabled, IPC is disabled 00:05:05.917 EAL: No shared files mode enabled, IPC is disabled 00:05:05.917 EAL: TSC frequency is ~2100000 KHz 00:05:05.917 EAL: Main lcore 0 is ready (tid=7fe82fcfca00;cpuset=[0]) 00:05:05.917 EAL: Trying to obtain current memory policy. 00:05:05.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.917 EAL: Restoring previous memory policy: 0 00:05:05.917 EAL: request: mp_malloc_sync 00:05:05.917 EAL: No shared files mode enabled, IPC is disabled 00:05:05.917 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.917 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:05.917 EAL: probe driver: 8086:37d2 net_i40e 00:05:05.917 EAL: Not managed by a supported kernel driver, skipped 00:05:05.917 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:05.917 EAL: probe driver: 8086:37d2 net_i40e 00:05:05.917 EAL: Not managed by a supported kernel driver, skipped 00:05:05.917 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.178 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.178 00:05:06.178 00:05:06.178 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.178 http://cunit.sourceforge.net/ 00:05:06.178 00:05:06.178 00:05:06.178 Suite: components_suite 00:05:06.178 Test: vtophys_malloc_test ...passed 00:05:06.178 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.178 EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.178 EAL: Trying to obtain current memory policy. 
00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.178 EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.178 EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.178 EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.178 EAL: Trying to obtain current memory policy. 00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.178 EAL: Trying to obtain current memory policy. 
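Editor's note: the expand/shrink sizes this suite reports (4, 6, 10, 18, 34, 66, 130 MB, and later 258, 514 and 1026 MB) look like a doubling allocation plus one extra 2 MB hugepage of allocator overhead. That reading is an interpretation of the numbers, not something the log states; a quick bash check of it:

  for n in $(seq 1 10); do printf '%dMB ' $((2 ** n + 2)); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB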
00:05:06.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.178 EAL: Restoring previous memory policy: 4 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.178 EAL: request: mp_malloc_sync 00:05:06.178 EAL: No shared files mode enabled, IPC is disabled 00:05:06.178 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.178 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.438 EAL: request: mp_malloc_sync 00:05:06.438 EAL: No shared files mode enabled, IPC is disabled 00:05:06.438 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.438 EAL: Trying to obtain current memory policy. 00:05:06.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.438 EAL: Restoring previous memory policy: 4 00:05:06.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.438 EAL: request: mp_malloc_sync 00:05:06.438 EAL: No shared files mode enabled, IPC is disabled 00:05:06.438 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.438 EAL: request: mp_malloc_sync 00:05:06.438 EAL: No shared files mode enabled, IPC is disabled 00:05:06.438 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.438 EAL: Trying to obtain current memory policy. 00:05:06.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.698 EAL: Restoring previous memory policy: 4 00:05:06.698 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.698 EAL: request: mp_malloc_sync 00:05:06.698 EAL: No shared files mode enabled, IPC is disabled 00:05:06.698 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.957 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.957 EAL: request: mp_malloc_sync 00:05:06.957 EAL: No shared files mode enabled, IPC is disabled 00:05:06.957 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.957 passed 00:05:06.957 00:05:06.957 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.957 suites 1 1 n/a 0 0 00:05:06.957 tests 2 2 2 0 0 00:05:06.957 asserts 497 497 497 0 n/a 00:05:06.957 00:05:06.957 Elapsed time = 0.961 seconds 00:05:06.957 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.957 EAL: request: mp_malloc_sync 00:05:06.958 EAL: No shared files mode enabled, IPC is disabled 00:05:06.958 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.958 EAL: No shared files mode enabled, IPC is disabled 00:05:06.958 EAL: No shared files mode enabled, IPC is disabled 00:05:06.958 EAL: No shared files mode enabled, IPC is disabled 00:05:06.958 00:05:06.958 real 0m1.082s 00:05:06.958 user 0m0.643s 00:05:06.958 sys 0m0.414s 00:05:06.958 22:10:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.958 22:10:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.958 ************************************ 00:05:06.958 END TEST env_vtophys 00:05:06.958 ************************************ 00:05:07.217 22:10:56 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.217 22:10:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.217 22:10:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.217 22:10:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.217 ************************************ 00:05:07.217 START TEST env_pci 00:05:07.217 ************************************ 00:05:07.217 22:10:56 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.217 00:05:07.217 00:05:07.217 CUnit - A unit testing 
framework for C - Version 2.1-3 00:05:07.217 http://cunit.sourceforge.net/ 00:05:07.217 00:05:07.217 00:05:07.217 Suite: pci 00:05:07.217 Test: pci_hook ...[2024-12-16 22:10:56.722504] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103443 has claimed it 00:05:07.217 EAL: Cannot find device (10000:00:01.0) 00:05:07.217 EAL: Failed to attach device on primary process 00:05:07.217 passed 00:05:07.217 00:05:07.217 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.217 suites 1 1 n/a 0 0 00:05:07.217 tests 1 1 1 0 0 00:05:07.217 asserts 25 25 25 0 n/a 00:05:07.217 00:05:07.217 Elapsed time = 0.029 seconds 00:05:07.217 00:05:07.217 real 0m0.046s 00:05:07.217 user 0m0.010s 00:05:07.217 sys 0m0.036s 00:05:07.217 22:10:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.217 22:10:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:07.217 ************************************ 00:05:07.217 END TEST env_pci 00:05:07.217 ************************************ 00:05:07.217 22:10:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.217 22:10:56 env -- env/env.sh@15 -- # uname 00:05:07.217 22:10:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.217 22:10:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.217 22:10:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.217 22:10:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:07.217 22:10:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.217 22:10:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.217 ************************************ 00:05:07.217 START TEST env_dpdk_post_init 00:05:07.217 ************************************ 00:05:07.217 22:10:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.217 EAL: Detected CPU lcores: 96 00:05:07.217 EAL: Detected NUMA nodes: 2 00:05:07.217 EAL: Detected shared linkage of DPDK 00:05:07.217 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.217 EAL: Selected IOVA mode 'VA' 00:05:07.217 EAL: VFIO support initialized 00:05:07.217 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.477 EAL: Using IOMMU type 1 (Type 1) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.6 (socket 0) 00:05:07.477 EAL: Ignore mapping IO port bar(1) 00:05:07.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:08.416 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:08.416 EAL: Ignore mapping IO port bar(1) 00:05:08.416 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:08.416 EAL: Ignore mapping IO port bar(1) 00:05:08.416 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:08.416 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:08.417 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:08.417 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:08.417 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:08.417 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:08.417 EAL: Ignore mapping IO port bar(1) 00:05:08.417 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:11.718 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:11.718 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:11.718 Starting DPDK initialization... 00:05:11.718 Starting SPDK post initialization... 00:05:11.718 SPDK NVMe probe 00:05:11.718 Attaching to 0000:5e:00.0 00:05:11.718 Attached to 0000:5e:00.0 00:05:11.718 Cleaning up... 00:05:11.718 00:05:11.718 real 0m4.313s 00:05:11.718 user 0m3.214s 00:05:11.718 sys 0m0.170s 00:05:11.718 22:11:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.718 22:11:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.718 ************************************ 00:05:11.718 END TEST env_dpdk_post_init 00:05:11.718 ************************************ 00:05:11.718 22:11:01 env -- env/env.sh@26 -- # uname 00:05:11.718 22:11:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.718 22:11:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.718 22:11:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.718 22:11:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.718 22:11:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.718 ************************************ 00:05:11.718 START TEST env_mem_callbacks 00:05:11.718 ************************************ 00:05:11.718 22:11:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.718 EAL: Detected CPU lcores: 96 00:05:11.718 EAL: Detected NUMA nodes: 2 00:05:11.718 EAL: Detected shared linkage of DPDK 00:05:11.718 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.718 EAL: Selected IOVA mode 'VA' 00:05:11.718 EAL: VFIO support initialized 00:05:11.718 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.718 00:05:11.718 00:05:11.718 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.718 http://cunit.sourceforge.net/ 00:05:11.718 00:05:11.718 00:05:11.718 Suite: memory 
00:05:11.718 Test: test ... 00:05:11.718 register 0x200000200000 2097152 00:05:11.718 malloc 3145728 00:05:11.718 register 0x200000400000 4194304 00:05:11.718 buf 0x200000500000 len 3145728 PASSED 00:05:11.718 malloc 64 00:05:11.718 buf 0x2000004fff40 len 64 PASSED 00:05:11.718 malloc 4194304 00:05:11.718 register 0x200000800000 6291456 00:05:11.718 buf 0x200000a00000 len 4194304 PASSED 00:05:11.718 free 0x200000500000 3145728 00:05:11.718 free 0x2000004fff40 64 00:05:11.718 unregister 0x200000400000 4194304 PASSED 00:05:11.718 free 0x200000a00000 4194304 00:05:11.718 unregister 0x200000800000 6291456 PASSED 00:05:11.718 malloc 8388608 00:05:11.718 register 0x200000400000 10485760 00:05:11.718 buf 0x200000600000 len 8388608 PASSED 00:05:11.718 free 0x200000600000 8388608 00:05:11.718 unregister 0x200000400000 10485760 PASSED 00:05:11.718 passed 00:05:11.718 00:05:11.718 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.718 suites 1 1 n/a 0 0 00:05:11.718 tests 1 1 1 0 0 00:05:11.718 asserts 15 15 15 0 n/a 00:05:11.718 00:05:11.718 Elapsed time = 0.008 seconds 00:05:11.718 00:05:11.718 real 0m0.058s 00:05:11.718 user 0m0.020s 00:05:11.718 sys 0m0.038s 00:05:11.718 22:11:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.718 22:11:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.718 ************************************ 00:05:11.718 END TEST env_mem_callbacks 00:05:11.718 ************************************ 00:05:11.718 00:05:11.718 real 0m6.169s 00:05:11.718 user 0m4.251s 00:05:11.718 sys 0m0.999s 00:05:11.718 22:11:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.718 22:11:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.718 ************************************ 00:05:11.718 END TEST env 00:05:11.718 ************************************ 00:05:11.718 22:11:01 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.718 22:11:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.718 22:11:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.718 22:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:11.718 ************************************ 00:05:11.718 START TEST rpc 00:05:11.718 ************************************ 00:05:11.719 22:11:01 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.978 * Looking for test storage... 
00:05:11.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.978 22:11:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.978 22:11:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.978 22:11:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.978 22:11:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.978 22:11:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.978 22:11:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.978 22:11:01 rpc -- scripts/common.sh@345 -- # : 1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.978 22:11:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.978 22:11:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.978 22:11:01 rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.978 22:11:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.978 22:11:01 rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.978 22:11:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.978 22:11:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.978 22:11:01 rpc -- scripts/common.sh@368 -- # return 0 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.978 --rc genhtml_branch_coverage=1 00:05:11.978 --rc genhtml_function_coverage=1 00:05:11.978 --rc genhtml_legend=1 00:05:11.978 --rc geninfo_all_blocks=1 00:05:11.978 --rc geninfo_unexecuted_blocks=1 00:05:11.978 00:05:11.978 ' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.978 --rc genhtml_branch_coverage=1 00:05:11.978 --rc genhtml_function_coverage=1 00:05:11.978 --rc genhtml_legend=1 00:05:11.978 --rc geninfo_all_blocks=1 00:05:11.978 --rc geninfo_unexecuted_blocks=1 00:05:11.978 00:05:11.978 ' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.978 --rc genhtml_branch_coverage=1 00:05:11.978 --rc genhtml_function_coverage=1 
00:05:11.978 --rc genhtml_legend=1 00:05:11.978 --rc geninfo_all_blocks=1 00:05:11.978 --rc geninfo_unexecuted_blocks=1 00:05:11.978 00:05:11.978 ' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.978 --rc genhtml_branch_coverage=1 00:05:11.978 --rc genhtml_function_coverage=1 00:05:11.978 --rc genhtml_legend=1 00:05:11.978 --rc geninfo_all_blocks=1 00:05:11.978 --rc geninfo_unexecuted_blocks=1 00:05:11.978 00:05:11.978 ' 00:05:11.978 22:11:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104265 00:05:11.978 22:11:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.978 22:11:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.978 22:11:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104265 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 104265 ']' 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.978 22:11:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.978 [2024-12-16 22:11:01.610567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:11.978 [2024-12-16 22:11:01.610610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104265 ] 00:05:12.238 [2024-12-16 22:11:01.683071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.238 [2024-12-16 22:11:01.704980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.238 [2024-12-16 22:11:01.705018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104265' to capture a snapshot of events at runtime. 00:05:12.238 [2024-12-16 22:11:01.705026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.238 [2024-12-16 22:11:01.705033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.238 [2024-12-16 22:11:01.705038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104265 for offline analysis/debug. 
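A note on the trace setup above: rpc.sh launches spdk_tgt with "-e bdev", which enables the bdev tracepoint group, and the app_setup_trace notices name both the live-capture command and the /dev/shm file kept for offline analysis. A minimal sketch of that workflow, assuming default build paths and substituting a shell variable where the log prints the literal pid 104265:

    # Start the target with the 'bdev' tpoint group enabled, as rpc.sh does above.
    build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # Live snapshot of tracepoint events while the target runs.
    build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"
    # After the target exits, the shm copy remains for offline analysis.
    build/bin/spdk_trace -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"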
00:05:12.238 [2024-12-16 22:11:01.705513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.238 22:11:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.238 22:11:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.238 22:11:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.238 22:11:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.238 22:11:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.238 22:11:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.238 22:11:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.238 22:11:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.238 22:11:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 ************************************ 00:05:12.498 START TEST rpc_integrity 00:05:12.498 ************************************ 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:12.498 22:11:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.498 22:11:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.498 22:11:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.498 22:11:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.498 { 00:05:12.498 "name": "Malloc0", 00:05:12.498 "aliases": [ 00:05:12.498 "96343835-8a3b-4c55-af3c-2c19d614111e" 00:05:12.498 ], 00:05:12.498 "product_name": "Malloc disk", 00:05:12.498 "block_size": 512, 00:05:12.498 "num_blocks": 16384, 00:05:12.498 "uuid": "96343835-8a3b-4c55-af3c-2c19d614111e", 00:05:12.498 "assigned_rate_limits": { 00:05:12.498 "rw_ios_per_sec": 0, 00:05:12.498 "rw_mbytes_per_sec": 0, 00:05:12.498 "r_mbytes_per_sec": 0, 00:05:12.498 "w_mbytes_per_sec": 0 00:05:12.498 }, 
00:05:12.498 "claimed": false, 00:05:12.498 "zoned": false, 00:05:12.498 "supported_io_types": { 00:05:12.498 "read": true, 00:05:12.498 "write": true, 00:05:12.498 "unmap": true, 00:05:12.498 "flush": true, 00:05:12.498 "reset": true, 00:05:12.498 "nvme_admin": false, 00:05:12.498 "nvme_io": false, 00:05:12.498 "nvme_io_md": false, 00:05:12.498 "write_zeroes": true, 00:05:12.498 "zcopy": true, 00:05:12.498 "get_zone_info": false, 00:05:12.498 "zone_management": false, 00:05:12.498 "zone_append": false, 00:05:12.498 "compare": false, 00:05:12.498 "compare_and_write": false, 00:05:12.498 "abort": true, 00:05:12.498 "seek_hole": false, 00:05:12.498 "seek_data": false, 00:05:12.498 "copy": true, 00:05:12.498 "nvme_iov_md": false 00:05:12.498 }, 00:05:12.498 "memory_domains": [ 00:05:12.498 { 00:05:12.498 "dma_device_id": "system", 00:05:12.498 "dma_device_type": 1 00:05:12.498 }, 00:05:12.498 { 00:05:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.498 "dma_device_type": 2 00:05:12.498 } 00:05:12.498 ], 00:05:12.498 "driver_specific": {} 00:05:12.498 } 00:05:12.498 ]' 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 [2024-12-16 22:11:02.076585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.498 [2024-12-16 22:11:02.076617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.498 [2024-12-16 22:11:02.076630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x108da00 00:05:12.498 [2024-12-16 22:11:02.076638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.498 [2024-12-16 22:11:02.077647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.498 [2024-12-16 22:11:02.077669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.498 Passthru0 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.498 { 00:05:12.498 "name": "Malloc0", 00:05:12.498 "aliases": [ 00:05:12.498 "96343835-8a3b-4c55-af3c-2c19d614111e" 00:05:12.498 ], 00:05:12.498 "product_name": "Malloc disk", 00:05:12.498 "block_size": 512, 00:05:12.498 "num_blocks": 16384, 00:05:12.498 "uuid": "96343835-8a3b-4c55-af3c-2c19d614111e", 00:05:12.498 "assigned_rate_limits": { 00:05:12.498 "rw_ios_per_sec": 0, 00:05:12.498 "rw_mbytes_per_sec": 0, 00:05:12.498 "r_mbytes_per_sec": 0, 00:05:12.498 "w_mbytes_per_sec": 0 00:05:12.498 }, 00:05:12.498 "claimed": true, 00:05:12.498 "claim_type": "exclusive_write", 00:05:12.498 "zoned": false, 00:05:12.498 "supported_io_types": { 00:05:12.498 "read": true, 00:05:12.498 "write": true, 00:05:12.498 "unmap": true, 00:05:12.498 "flush": 
true, 00:05:12.498 "reset": true, 00:05:12.498 "nvme_admin": false, 00:05:12.498 "nvme_io": false, 00:05:12.498 "nvme_io_md": false, 00:05:12.498 "write_zeroes": true, 00:05:12.498 "zcopy": true, 00:05:12.498 "get_zone_info": false, 00:05:12.498 "zone_management": false, 00:05:12.498 "zone_append": false, 00:05:12.498 "compare": false, 00:05:12.498 "compare_and_write": false, 00:05:12.498 "abort": true, 00:05:12.498 "seek_hole": false, 00:05:12.498 "seek_data": false, 00:05:12.498 "copy": true, 00:05:12.498 "nvme_iov_md": false 00:05:12.498 }, 00:05:12.498 "memory_domains": [ 00:05:12.498 { 00:05:12.498 "dma_device_id": "system", 00:05:12.498 "dma_device_type": 1 00:05:12.498 }, 00:05:12.498 { 00:05:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.498 "dma_device_type": 2 00:05:12.498 } 00:05:12.498 ], 00:05:12.498 "driver_specific": {} 00:05:12.498 }, 00:05:12.498 { 00:05:12.498 "name": "Passthru0", 00:05:12.498 "aliases": [ 00:05:12.498 "2c6530df-fe86-5738-9241-848270f482fb" 00:05:12.498 ], 00:05:12.498 "product_name": "passthru", 00:05:12.498 "block_size": 512, 00:05:12.498 "num_blocks": 16384, 00:05:12.498 "uuid": "2c6530df-fe86-5738-9241-848270f482fb", 00:05:12.498 "assigned_rate_limits": { 00:05:12.498 "rw_ios_per_sec": 0, 00:05:12.498 "rw_mbytes_per_sec": 0, 00:05:12.498 "r_mbytes_per_sec": 0, 00:05:12.498 "w_mbytes_per_sec": 0 00:05:12.498 }, 00:05:12.498 "claimed": false, 00:05:12.498 "zoned": false, 00:05:12.498 "supported_io_types": { 00:05:12.498 "read": true, 00:05:12.498 "write": true, 00:05:12.498 "unmap": true, 00:05:12.498 "flush": true, 00:05:12.498 "reset": true, 00:05:12.498 "nvme_admin": false, 00:05:12.498 "nvme_io": false, 00:05:12.498 "nvme_io_md": false, 00:05:12.498 "write_zeroes": true, 00:05:12.498 "zcopy": true, 00:05:12.498 "get_zone_info": false, 00:05:12.498 "zone_management": false, 00:05:12.498 "zone_append": false, 00:05:12.498 "compare": false, 00:05:12.498 "compare_and_write": false, 00:05:12.498 "abort": true, 00:05:12.498 "seek_hole": false, 00:05:12.498 "seek_data": false, 00:05:12.498 "copy": true, 00:05:12.498 "nvme_iov_md": false 00:05:12.498 }, 00:05:12.498 "memory_domains": [ 00:05:12.498 { 00:05:12.498 "dma_device_id": "system", 00:05:12.498 "dma_device_type": 1 00:05:12.498 }, 00:05:12.498 { 00:05:12.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.498 "dma_device_type": 2 00:05:12.498 } 00:05:12.498 ], 00:05:12.498 "driver_specific": { 00:05:12.498 "passthru": { 00:05:12.498 "name": "Passthru0", 00:05:12.498 "base_bdev_name": "Malloc0" 00:05:12.498 } 00:05:12.498 } 00:05:12.498 } 00:05:12.498 ]' 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.498 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.498 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.499 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.499 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:12.499 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.499 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.499 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.499 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.499 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.757 22:11:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.757 00:05:12.757 real 0m0.271s 00:05:12.757 user 0m0.168s 00:05:12.757 sys 0m0.035s 00:05:12.757 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.757 22:11:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.757 ************************************ 00:05:12.757 END TEST rpc_integrity 00:05:12.757 ************************************ 00:05:12.757 22:11:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.757 22:11:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.757 22:11:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.757 22:11:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.757 ************************************ 00:05:12.757 START TEST rpc_plugins 00:05:12.757 ************************************ 00:05:12.757 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:12.757 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.757 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.758 { 00:05:12.758 "name": "Malloc1", 00:05:12.758 "aliases": [ 00:05:12.758 "6fc4fa04-698d-47bd-aa2a-0e5f785ae024" 00:05:12.758 ], 00:05:12.758 "product_name": "Malloc disk", 00:05:12.758 "block_size": 4096, 00:05:12.758 "num_blocks": 256, 00:05:12.758 "uuid": "6fc4fa04-698d-47bd-aa2a-0e5f785ae024", 00:05:12.758 "assigned_rate_limits": { 00:05:12.758 "rw_ios_per_sec": 0, 00:05:12.758 "rw_mbytes_per_sec": 0, 00:05:12.758 "r_mbytes_per_sec": 0, 00:05:12.758 "w_mbytes_per_sec": 0 00:05:12.758 }, 00:05:12.758 "claimed": false, 00:05:12.758 "zoned": false, 00:05:12.758 "supported_io_types": { 00:05:12.758 "read": true, 00:05:12.758 "write": true, 00:05:12.758 "unmap": true, 00:05:12.758 "flush": true, 00:05:12.758 "reset": true, 00:05:12.758 "nvme_admin": false, 00:05:12.758 "nvme_io": false, 00:05:12.758 "nvme_io_md": false, 00:05:12.758 "write_zeroes": true, 00:05:12.758 "zcopy": true, 00:05:12.758 "get_zone_info": false, 00:05:12.758 "zone_management": false, 00:05:12.758 "zone_append": false, 00:05:12.758 "compare": false, 00:05:12.758 "compare_and_write": false, 00:05:12.758 "abort": true, 00:05:12.758 "seek_hole": false, 00:05:12.758 "seek_data": false, 00:05:12.758 "copy": true, 00:05:12.758 "nvme_iov_md": false 
00:05:12.758 }, 00:05:12.758 "memory_domains": [ 00:05:12.758 { 00:05:12.758 "dma_device_id": "system", 00:05:12.758 "dma_device_type": 1 00:05:12.758 }, 00:05:12.758 { 00:05:12.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.758 "dma_device_type": 2 00:05:12.758 } 00:05:12.758 ], 00:05:12.758 "driver_specific": {} 00:05:12.758 } 00:05:12.758 ]' 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.758 22:11:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.758 00:05:12.758 real 0m0.143s 00:05:12.758 user 0m0.090s 00:05:12.758 sys 0m0.016s 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.758 22:11:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.758 ************************************ 00:05:12.758 END TEST rpc_plugins 00:05:12.758 ************************************ 00:05:13.017 22:11:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.017 22:11:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.017 22:11:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.017 22:11:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.017 ************************************ 00:05:13.017 START TEST rpc_trace_cmd_test 00:05:13.017 ************************************ 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.017 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104265", 00:05:13.017 "tpoint_group_mask": "0x8", 00:05:13.017 "iscsi_conn": { 00:05:13.017 "mask": "0x2", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "scsi": { 00:05:13.017 "mask": "0x4", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "bdev": { 00:05:13.017 "mask": "0x8", 00:05:13.017 "tpoint_mask": "0xffffffffffffffff" 00:05:13.017 }, 00:05:13.017 "nvmf_rdma": { 00:05:13.017 "mask": "0x10", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "nvmf_tcp": { 00:05:13.017 "mask": "0x20", 00:05:13.017 
"tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "ftl": { 00:05:13.017 "mask": "0x40", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "blobfs": { 00:05:13.017 "mask": "0x80", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "dsa": { 00:05:13.017 "mask": "0x200", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "thread": { 00:05:13.017 "mask": "0x400", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "nvme_pcie": { 00:05:13.017 "mask": "0x800", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "iaa": { 00:05:13.017 "mask": "0x1000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "nvme_tcp": { 00:05:13.017 "mask": "0x2000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "bdev_nvme": { 00:05:13.017 "mask": "0x4000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "sock": { 00:05:13.017 "mask": "0x8000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "blob": { 00:05:13.017 "mask": "0x10000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "bdev_raid": { 00:05:13.017 "mask": "0x20000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 }, 00:05:13.017 "scheduler": { 00:05:13.017 "mask": "0x40000", 00:05:13.017 "tpoint_mask": "0x0" 00:05:13.017 } 00:05:13.017 }' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.017 00:05:13.017 real 0m0.209s 00:05:13.017 user 0m0.165s 00:05:13.017 sys 0m0.035s 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.017 22:11:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.017 ************************************ 00:05:13.017 END TEST rpc_trace_cmd_test 00:05:13.017 ************************************ 00:05:13.277 22:11:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.277 22:11:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.277 22:11:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.277 22:11:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.277 22:11:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.277 22:11:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 ************************************ 00:05:13.277 START TEST rpc_daemon_integrity 00:05:13.277 ************************************ 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.277 22:11:02 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.277 { 00:05:13.277 "name": "Malloc2", 00:05:13.277 "aliases": [ 00:05:13.277 "98a6682b-ae4a-4ece-85ec-f456fe5b69de" 00:05:13.277 ], 00:05:13.277 "product_name": "Malloc disk", 00:05:13.277 "block_size": 512, 00:05:13.277 "num_blocks": 16384, 00:05:13.277 "uuid": "98a6682b-ae4a-4ece-85ec-f456fe5b69de", 00:05:13.277 "assigned_rate_limits": { 00:05:13.277 "rw_ios_per_sec": 0, 00:05:13.277 "rw_mbytes_per_sec": 0, 00:05:13.277 "r_mbytes_per_sec": 0, 00:05:13.277 "w_mbytes_per_sec": 0 00:05:13.277 }, 00:05:13.277 "claimed": false, 00:05:13.277 "zoned": false, 00:05:13.277 "supported_io_types": { 00:05:13.277 "read": true, 00:05:13.277 "write": true, 00:05:13.277 "unmap": true, 00:05:13.277 "flush": true, 00:05:13.277 "reset": true, 00:05:13.277 "nvme_admin": false, 00:05:13.277 "nvme_io": false, 00:05:13.277 "nvme_io_md": false, 00:05:13.277 "write_zeroes": true, 00:05:13.277 "zcopy": true, 00:05:13.277 "get_zone_info": false, 00:05:13.277 "zone_management": false, 00:05:13.277 "zone_append": false, 00:05:13.277 "compare": false, 00:05:13.277 "compare_and_write": false, 00:05:13.277 "abort": true, 00:05:13.277 "seek_hole": false, 00:05:13.277 "seek_data": false, 00:05:13.277 "copy": true, 00:05:13.277 "nvme_iov_md": false 00:05:13.277 }, 00:05:13.277 "memory_domains": [ 00:05:13.277 { 00:05:13.277 "dma_device_id": "system", 00:05:13.277 "dma_device_type": 1 00:05:13.277 }, 00:05:13.277 { 00:05:13.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.277 "dma_device_type": 2 00:05:13.277 } 00:05:13.277 ], 00:05:13.277 "driver_specific": {} 00:05:13.277 } 00:05:13.277 ]' 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.277 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 [2024-12-16 22:11:02.910818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.277 
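The rpc_integrity suite above and the rpc_daemon_integrity suite running here drive the same RPC sequence: create a malloc bdev, layer a passthru bdev on it, check that bdev_get_bdevs reports both, then tear down and confirm the list is empty again. A condensed sketch of that sequence, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock (the tests themselves go through the rpc_cmd wrapper):

    scripts/rpc.py bdev_malloc_create 8 512             # prints the new name, e.g. Malloc0
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length           # expect 2 (malloc + passthru)
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length           # expect 0 again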
[2024-12-16 22:11:02.910846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.277 [2024-12-16 22:11:02.910857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf4bac0 00:05:13.277 [2024-12-16 22:11:02.910864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.277 [2024-12-16 22:11:02.911796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.277 [2024-12-16 22:11:02.911819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.277 Passthru0 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.278 { 00:05:13.278 "name": "Malloc2", 00:05:13.278 "aliases": [ 00:05:13.278 "98a6682b-ae4a-4ece-85ec-f456fe5b69de" 00:05:13.278 ], 00:05:13.278 "product_name": "Malloc disk", 00:05:13.278 "block_size": 512, 00:05:13.278 "num_blocks": 16384, 00:05:13.278 "uuid": "98a6682b-ae4a-4ece-85ec-f456fe5b69de", 00:05:13.278 "assigned_rate_limits": { 00:05:13.278 "rw_ios_per_sec": 0, 00:05:13.278 "rw_mbytes_per_sec": 0, 00:05:13.278 "r_mbytes_per_sec": 0, 00:05:13.278 "w_mbytes_per_sec": 0 00:05:13.278 }, 00:05:13.278 "claimed": true, 00:05:13.278 "claim_type": "exclusive_write", 00:05:13.278 "zoned": false, 00:05:13.278 "supported_io_types": { 00:05:13.278 "read": true, 00:05:13.278 "write": true, 00:05:13.278 "unmap": true, 00:05:13.278 "flush": true, 00:05:13.278 "reset": true, 00:05:13.278 "nvme_admin": false, 00:05:13.278 "nvme_io": false, 00:05:13.278 "nvme_io_md": false, 00:05:13.278 "write_zeroes": true, 00:05:13.278 "zcopy": true, 00:05:13.278 "get_zone_info": false, 00:05:13.278 "zone_management": false, 00:05:13.278 "zone_append": false, 00:05:13.278 "compare": false, 00:05:13.278 "compare_and_write": false, 00:05:13.278 "abort": true, 00:05:13.278 "seek_hole": false, 00:05:13.278 "seek_data": false, 00:05:13.278 "copy": true, 00:05:13.278 "nvme_iov_md": false 00:05:13.278 }, 00:05:13.278 "memory_domains": [ 00:05:13.278 { 00:05:13.278 "dma_device_id": "system", 00:05:13.278 "dma_device_type": 1 00:05:13.278 }, 00:05:13.278 { 00:05:13.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.278 "dma_device_type": 2 00:05:13.278 } 00:05:13.278 ], 00:05:13.278 "driver_specific": {} 00:05:13.278 }, 00:05:13.278 { 00:05:13.278 "name": "Passthru0", 00:05:13.278 "aliases": [ 00:05:13.278 "24e5689a-d658-5ce1-aea8-5ccccdb04512" 00:05:13.278 ], 00:05:13.278 "product_name": "passthru", 00:05:13.278 "block_size": 512, 00:05:13.278 "num_blocks": 16384, 00:05:13.278 "uuid": "24e5689a-d658-5ce1-aea8-5ccccdb04512", 00:05:13.278 "assigned_rate_limits": { 00:05:13.278 "rw_ios_per_sec": 0, 00:05:13.278 "rw_mbytes_per_sec": 0, 00:05:13.278 "r_mbytes_per_sec": 0, 00:05:13.278 "w_mbytes_per_sec": 0 00:05:13.278 }, 00:05:13.278 "claimed": false, 00:05:13.278 "zoned": false, 00:05:13.278 "supported_io_types": { 00:05:13.278 "read": true, 00:05:13.278 "write": true, 00:05:13.278 "unmap": true, 00:05:13.278 "flush": true, 00:05:13.278 "reset": true, 
00:05:13.278 "nvme_admin": false, 00:05:13.278 "nvme_io": false, 00:05:13.278 "nvme_io_md": false, 00:05:13.278 "write_zeroes": true, 00:05:13.278 "zcopy": true, 00:05:13.278 "get_zone_info": false, 00:05:13.278 "zone_management": false, 00:05:13.278 "zone_append": false, 00:05:13.278 "compare": false, 00:05:13.278 "compare_and_write": false, 00:05:13.278 "abort": true, 00:05:13.278 "seek_hole": false, 00:05:13.278 "seek_data": false, 00:05:13.278 "copy": true, 00:05:13.278 "nvme_iov_md": false 00:05:13.278 }, 00:05:13.278 "memory_domains": [ 00:05:13.278 { 00:05:13.278 "dma_device_id": "system", 00:05:13.278 "dma_device_type": 1 00:05:13.278 }, 00:05:13.278 { 00:05:13.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.278 "dma_device_type": 2 00:05:13.278 } 00:05:13.278 ], 00:05:13.278 "driver_specific": { 00:05:13.278 "passthru": { 00:05:13.278 "name": "Passthru0", 00:05:13.278 "base_bdev_name": "Malloc2" 00:05:13.278 } 00:05:13.278 } 00:05:13.278 } 00:05:13.278 ]' 00:05:13.278 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.538 22:11:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.538 00:05:13.538 real 0m0.284s 00:05:13.538 user 0m0.181s 00:05:13.538 sys 0m0.036s 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.538 22:11:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.538 ************************************ 00:05:13.538 END TEST rpc_daemon_integrity 00:05:13.538 ************************************ 00:05:13.538 22:11:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.538 22:11:03 rpc -- rpc/rpc.sh@84 -- # killprocess 104265 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 104265 ']' 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@958 -- # kill -0 104265 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104265 
00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104265' 00:05:13.538 killing process with pid 104265 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@973 -- # kill 104265 00:05:13.538 22:11:03 rpc -- common/autotest_common.sh@978 -- # wait 104265 00:05:13.797 00:05:13.797 real 0m2.043s 00:05:13.797 user 0m2.627s 00:05:13.797 sys 0m0.668s 00:05:13.797 22:11:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.797 22:11:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.797 ************************************ 00:05:13.797 END TEST rpc 00:05:13.798 ************************************ 00:05:13.798 22:11:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.798 22:11:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.798 22:11:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.798 22:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:14.057 ************************************ 00:05:14.057 START TEST skip_rpc 00:05:14.057 ************************************ 00:05:14.057 22:11:03 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.057 * Looking for test storage... 00:05:14.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.057 22:11:03 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.058 22:11:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.058 --rc genhtml_branch_coverage=1 00:05:14.058 --rc genhtml_function_coverage=1 00:05:14.058 --rc genhtml_legend=1 00:05:14.058 --rc geninfo_all_blocks=1 00:05:14.058 --rc geninfo_unexecuted_blocks=1 00:05:14.058 00:05:14.058 ' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.058 --rc genhtml_branch_coverage=1 00:05:14.058 --rc genhtml_function_coverage=1 00:05:14.058 --rc genhtml_legend=1 00:05:14.058 --rc geninfo_all_blocks=1 00:05:14.058 --rc geninfo_unexecuted_blocks=1 00:05:14.058 00:05:14.058 ' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.058 --rc genhtml_branch_coverage=1 00:05:14.058 --rc genhtml_function_coverage=1 00:05:14.058 --rc genhtml_legend=1 00:05:14.058 --rc geninfo_all_blocks=1 00:05:14.058 --rc geninfo_unexecuted_blocks=1 00:05:14.058 00:05:14.058 ' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.058 --rc genhtml_branch_coverage=1 00:05:14.058 --rc genhtml_function_coverage=1 00:05:14.058 --rc genhtml_legend=1 00:05:14.058 --rc geninfo_all_blocks=1 00:05:14.058 --rc geninfo_unexecuted_blocks=1 00:05:14.058 00:05:14.058 ' 00:05:14.058 22:11:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.058 22:11:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.058 22:11:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.058 22:11:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.058 ************************************ 00:05:14.058 START TEST skip_rpc 00:05:14.058 ************************************ 00:05:14.058 22:11:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:14.058 
22:11:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104888 00:05:14.058 22:11:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.058 22:11:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.058 22:11:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.058 [2024-12-16 22:11:03.758413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:14.058 [2024-12-16 22:11:03.758449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104888 ] 00:05:14.317 [2024-12-16 22:11:03.833575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.317 [2024-12-16 22:11:03.855958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104888 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104888 ']' 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104888 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104888 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104888' 00:05:19.591 killing process with pid 104888 00:05:19.591 22:11:08 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104888 00:05:19.591 22:11:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104888 00:05:19.591 00:05:19.591 real 0m5.360s 00:05:19.591 user 0m5.110s 00:05:19.591 sys 0m0.287s 00:05:19.591 22:11:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.591 22:11:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.591 ************************************ 00:05:19.591 END TEST skip_rpc 00:05:19.591 ************************************ 00:05:19.591 22:11:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.591 22:11:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.591 22:11:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.591 22:11:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.591 ************************************ 00:05:19.591 START TEST skip_rpc_with_json 00:05:19.591 ************************************ 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105810 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105810 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105810 ']' 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.591 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.591 [2024-12-16 22:11:09.193472] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:19.591 [2024-12-16 22:11:09.193516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105810 ] 00:05:19.591 [2024-12-16 22:11:09.267895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.591 [2024-12-16 22:11:09.287455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.850 [2024-12-16 22:11:09.501947] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:19.850 request: 00:05:19.850 { 00:05:19.850 "trtype": "tcp", 00:05:19.850 "method": "nvmf_get_transports", 00:05:19.850 "req_id": 1 00:05:19.850 } 00:05:19.850 Got JSON-RPC error response 00:05:19.850 response: 00:05:19.850 { 00:05:19.850 "code": -19, 00:05:19.850 "message": "No such device" 00:05:19.850 } 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.850 [2024-12-16 22:11:09.514060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.850 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.110 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.110 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.110 { 00:05:20.110 "subsystems": [ 00:05:20.110 { 00:05:20.110 "subsystem": "fsdev", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "fsdev_set_opts", 00:05:20.110 "params": { 00:05:20.110 "fsdev_io_pool_size": 65535, 00:05:20.110 "fsdev_io_cache_size": 256 00:05:20.110 } 00:05:20.110 } 00:05:20.110 ] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "vfio_user_target", 00:05:20.110 "config": null 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "keyring", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "iobuf", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "iobuf_set_options", 00:05:20.110 "params": { 00:05:20.110 "small_pool_count": 8192, 00:05:20.110 "large_pool_count": 1024, 00:05:20.110 "small_bufsize": 8192, 00:05:20.110 "large_bufsize": 135168, 00:05:20.110 "enable_numa": false 00:05:20.110 } 00:05:20.110 } 00:05:20.110 
] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "sock", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "sock_set_default_impl", 00:05:20.110 "params": { 00:05:20.110 "impl_name": "posix" 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "sock_impl_set_options", 00:05:20.110 "params": { 00:05:20.110 "impl_name": "ssl", 00:05:20.110 "recv_buf_size": 4096, 00:05:20.110 "send_buf_size": 4096, 00:05:20.110 "enable_recv_pipe": true, 00:05:20.110 "enable_quickack": false, 00:05:20.110 "enable_placement_id": 0, 00:05:20.110 "enable_zerocopy_send_server": true, 00:05:20.110 "enable_zerocopy_send_client": false, 00:05:20.110 "zerocopy_threshold": 0, 00:05:20.110 "tls_version": 0, 00:05:20.110 "enable_ktls": false 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "sock_impl_set_options", 00:05:20.110 "params": { 00:05:20.110 "impl_name": "posix", 00:05:20.110 "recv_buf_size": 2097152, 00:05:20.110 "send_buf_size": 2097152, 00:05:20.110 "enable_recv_pipe": true, 00:05:20.110 "enable_quickack": false, 00:05:20.110 "enable_placement_id": 0, 00:05:20.110 "enable_zerocopy_send_server": true, 00:05:20.110 "enable_zerocopy_send_client": false, 00:05:20.110 "zerocopy_threshold": 0, 00:05:20.110 "tls_version": 0, 00:05:20.110 "enable_ktls": false 00:05:20.110 } 00:05:20.110 } 00:05:20.110 ] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "vmd", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "accel", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "accel_set_options", 00:05:20.110 "params": { 00:05:20.110 "small_cache_size": 128, 00:05:20.110 "large_cache_size": 16, 00:05:20.110 "task_count": 2048, 00:05:20.110 "sequence_count": 2048, 00:05:20.110 "buf_count": 2048 00:05:20.110 } 00:05:20.110 } 00:05:20.110 ] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "bdev", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "bdev_set_options", 00:05:20.110 "params": { 00:05:20.110 "bdev_io_pool_size": 65535, 00:05:20.110 "bdev_io_cache_size": 256, 00:05:20.110 "bdev_auto_examine": true, 00:05:20.110 "iobuf_small_cache_size": 128, 00:05:20.110 "iobuf_large_cache_size": 16 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "bdev_raid_set_options", 00:05:20.110 "params": { 00:05:20.110 "process_window_size_kb": 1024, 00:05:20.110 "process_max_bandwidth_mb_sec": 0 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "bdev_iscsi_set_options", 00:05:20.110 "params": { 00:05:20.110 "timeout_sec": 30 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "bdev_nvme_set_options", 00:05:20.110 "params": { 00:05:20.110 "action_on_timeout": "none", 00:05:20.110 "timeout_us": 0, 00:05:20.110 "timeout_admin_us": 0, 00:05:20.110 "keep_alive_timeout_ms": 10000, 00:05:20.110 "arbitration_burst": 0, 00:05:20.110 "low_priority_weight": 0, 00:05:20.110 "medium_priority_weight": 0, 00:05:20.110 "high_priority_weight": 0, 00:05:20.110 "nvme_adminq_poll_period_us": 10000, 00:05:20.110 "nvme_ioq_poll_period_us": 0, 00:05:20.110 "io_queue_requests": 0, 00:05:20.110 "delay_cmd_submit": true, 00:05:20.110 "transport_retry_count": 4, 00:05:20.110 "bdev_retry_count": 3, 00:05:20.110 "transport_ack_timeout": 0, 00:05:20.110 "ctrlr_loss_timeout_sec": 0, 00:05:20.110 "reconnect_delay_sec": 0, 00:05:20.110 "fast_io_fail_timeout_sec": 0, 00:05:20.110 "disable_auto_failback": false, 00:05:20.110 "generate_uuids": false, 00:05:20.110 "transport_tos": 0, 
00:05:20.110 "nvme_error_stat": false, 00:05:20.110 "rdma_srq_size": 0, 00:05:20.110 "io_path_stat": false, 00:05:20.110 "allow_accel_sequence": false, 00:05:20.110 "rdma_max_cq_size": 0, 00:05:20.110 "rdma_cm_event_timeout_ms": 0, 00:05:20.110 "dhchap_digests": [ 00:05:20.110 "sha256", 00:05:20.110 "sha384", 00:05:20.110 "sha512" 00:05:20.110 ], 00:05:20.110 "dhchap_dhgroups": [ 00:05:20.110 "null", 00:05:20.110 "ffdhe2048", 00:05:20.110 "ffdhe3072", 00:05:20.110 "ffdhe4096", 00:05:20.110 "ffdhe6144", 00:05:20.110 "ffdhe8192" 00:05:20.110 ], 00:05:20.110 "rdma_umr_per_io": false 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "bdev_nvme_set_hotplug", 00:05:20.110 "params": { 00:05:20.110 "period_us": 100000, 00:05:20.110 "enable": false 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "bdev_wait_for_examine" 00:05:20.110 } 00:05:20.110 ] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "scsi", 00:05:20.110 "config": null 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "scheduler", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "framework_set_scheduler", 00:05:20.110 "params": { 00:05:20.110 "name": "static" 00:05:20.110 } 00:05:20.110 } 00:05:20.110 ] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "vhost_scsi", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "vhost_blk", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "ublk", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "nbd", 00:05:20.110 "config": [] 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "subsystem": "nvmf", 00:05:20.110 "config": [ 00:05:20.110 { 00:05:20.110 "method": "nvmf_set_config", 00:05:20.110 "params": { 00:05:20.110 "discovery_filter": "match_any", 00:05:20.110 "admin_cmd_passthru": { 00:05:20.110 "identify_ctrlr": false 00:05:20.110 }, 00:05:20.110 "dhchap_digests": [ 00:05:20.110 "sha256", 00:05:20.110 "sha384", 00:05:20.110 "sha512" 00:05:20.110 ], 00:05:20.110 "dhchap_dhgroups": [ 00:05:20.110 "null", 00:05:20.110 "ffdhe2048", 00:05:20.110 "ffdhe3072", 00:05:20.110 "ffdhe4096", 00:05:20.110 "ffdhe6144", 00:05:20.110 "ffdhe8192" 00:05:20.110 ] 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "nvmf_set_max_subsystems", 00:05:20.110 "params": { 00:05:20.110 "max_subsystems": 1024 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "nvmf_set_crdt", 00:05:20.110 "params": { 00:05:20.110 "crdt1": 0, 00:05:20.110 "crdt2": 0, 00:05:20.110 "crdt3": 0 00:05:20.110 } 00:05:20.110 }, 00:05:20.110 { 00:05:20.110 "method": "nvmf_create_transport", 00:05:20.110 "params": { 00:05:20.110 "trtype": "TCP", 00:05:20.110 "max_queue_depth": 128, 00:05:20.110 "max_io_qpairs_per_ctrlr": 127, 00:05:20.110 "in_capsule_data_size": 4096, 00:05:20.110 "max_io_size": 131072, 00:05:20.110 "io_unit_size": 131072, 00:05:20.110 "max_aq_depth": 128, 00:05:20.110 "num_shared_buffers": 511, 00:05:20.110 "buf_cache_size": 4294967295, 00:05:20.110 "dif_insert_or_strip": false, 00:05:20.110 "zcopy": false, 00:05:20.110 "c2h_success": true, 00:05:20.110 "sock_priority": 0, 00:05:20.110 "abort_timeout_sec": 1, 00:05:20.110 "ack_timeout": 0, 00:05:20.110 "data_wr_pool_size": 0 00:05:20.110 } 00:05:20.110 } 00:05:20.111 ] 00:05:20.111 }, 00:05:20.111 { 00:05:20.111 "subsystem": "iscsi", 00:05:20.111 "config": [ 00:05:20.111 { 00:05:20.111 "method": "iscsi_set_options", 00:05:20.111 "params": { 00:05:20.111 "node_base": 
"iqn.2016-06.io.spdk", 00:05:20.111 "max_sessions": 128, 00:05:20.111 "max_connections_per_session": 2, 00:05:20.111 "max_queue_depth": 64, 00:05:20.111 "default_time2wait": 2, 00:05:20.111 "default_time2retain": 20, 00:05:20.111 "first_burst_length": 8192, 00:05:20.111 "immediate_data": true, 00:05:20.111 "allow_duplicated_isid": false, 00:05:20.111 "error_recovery_level": 0, 00:05:20.111 "nop_timeout": 60, 00:05:20.111 "nop_in_interval": 30, 00:05:20.111 "disable_chap": false, 00:05:20.111 "require_chap": false, 00:05:20.111 "mutual_chap": false, 00:05:20.111 "chap_group": 0, 00:05:20.111 "max_large_datain_per_connection": 64, 00:05:20.111 "max_r2t_per_connection": 4, 00:05:20.111 "pdu_pool_size": 36864, 00:05:20.111 "immediate_data_pool_size": 16384, 00:05:20.111 "data_out_pool_size": 2048 00:05:20.111 } 00:05:20.111 } 00:05:20.111 ] 00:05:20.111 } 00:05:20.111 ] 00:05:20.111 } 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105810 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105810 ']' 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105810 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105810 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105810' 00:05:20.111 killing process with pid 105810 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105810 00:05:20.111 22:11:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105810 00:05:20.370 22:11:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105904 00:05:20.370 22:11:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.370 22:11:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105904 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105904 ']' 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105904 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105904 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.642 22:11:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105904' 00:05:25.642 killing process with pid 105904 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105904 00:05:25.642 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105904 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.902 00:05:25.902 real 0m6.244s 00:05:25.902 user 0m5.957s 00:05:25.902 sys 0m0.573s 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.902 ************************************ 00:05:25.902 END TEST skip_rpc_with_json 00:05:25.902 ************************************ 00:05:25.902 22:11:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.902 ************************************ 00:05:25.902 START TEST skip_rpc_with_delay 00:05:25.902 ************************************ 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.902 [2024-12-16 22:11:15.508501] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.902 00:05:25.902 real 0m0.065s 00:05:25.902 user 0m0.046s 00:05:25.902 sys 0m0.019s 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.902 22:11:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:25.902 ************************************ 00:05:25.902 END TEST skip_rpc_with_delay 00:05:25.902 ************************************ 00:05:25.902 22:11:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:25.902 22:11:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:25.902 22:11:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.902 22:11:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.902 ************************************ 00:05:25.902 START TEST exit_on_failed_rpc_init 00:05:25.902 ************************************ 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106951 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106951 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 106951 ']' 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.902 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.162 [2024-12-16 22:11:15.647932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
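Note: the skip_rpc_with_delay case above reduces to one argument-validation rule: spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given (the app.c: 842 error just logged). A standalone check, assuming the same binary path; the echo on the success branch is an addition for illustration:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # the combination is rejected before any subsystem starts, so this exits non-zero
    if $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: --wait-for-rpc accepted with --no-rpc-server' >&2
    fi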
00:05:26.162 [2024-12-16 22:11:15.647976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106951 ] 00:05:26.162 [2024-12-16 22:11:15.720995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.162 [2024-12-16 22:11:15.743054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.421 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.422 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.422 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.422 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.422 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.422 22:11:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.422 [2024-12-16 22:11:16.006618] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:26.422 [2024-12-16 22:11:16.006663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106999 ] 00:05:26.422 [2024-12-16 22:11:16.076549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.422 [2024-12-16 22:11:16.098565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.422 [2024-12-16 22:11:16.098619] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
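Note: the error just logged is the heart of exit_on_failed_rpc_init: a second target bound to the same default RPC socket cannot finish init, and spdk_app_stop exits non-zero, which the harness then maps down to es=1 a few lines below. A hedged reproduction; the core masks mirror this run, but the sleep-based ordering is a simplification, not the script's synchronization:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 &          # first target owns /var/tmp/spdk.sock
    first=$!
    sleep 2
    # second target finds the socket in use, logs 'Specify another.' and exits non-zero
    $SPDK/build/bin/spdk_tgt -m 0x2 || echo "second target failed as expected (es=$?)"
    kill "$first"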
00:05:26.422 [2024-12-16 22:11:16.098628] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:26.422 [2024-12-16 22:11:16.098634] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106951 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 106951 ']' 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 106951 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106951 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106951' 00:05:26.681 killing process with pid 106951 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 106951 00:05:26.681 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 106951 00:05:26.941 00:05:26.941 real 0m0.884s 00:05:26.941 user 0m0.926s 00:05:26.941 sys 0m0.380s 00:05:26.941 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.941 22:11:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.941 ************************************ 00:05:26.941 END TEST exit_on_failed_rpc_init 00:05:26.941 ************************************ 00:05:26.941 22:11:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.941 00:05:26.941 real 0m13.015s 00:05:26.941 user 0m12.246s 00:05:26.941 sys 0m1.546s 00:05:26.941 22:11:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.941 22:11:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.941 ************************************ 00:05:26.941 END TEST skip_rpc 00:05:26.941 ************************************ 00:05:26.941 22:11:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.941 22:11:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.941 22:11:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.941 22:11:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.941 ************************************ 00:05:26.941 START TEST rpc_client 00:05:26.941 ************************************ 00:05:26.941 22:11:16 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.201 * Looking for test storage... 00:05:27.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.201 22:11:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.201 --rc genhtml_branch_coverage=1 00:05:27.201 --rc genhtml_function_coverage=1 00:05:27.201 --rc genhtml_legend=1 00:05:27.201 --rc geninfo_all_blocks=1 00:05:27.201 --rc geninfo_unexecuted_blocks=1 00:05:27.201 00:05:27.201 ' 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.201 --rc genhtml_branch_coverage=1 00:05:27.201 --rc genhtml_function_coverage=1 00:05:27.201 --rc genhtml_legend=1 00:05:27.201 --rc geninfo_all_blocks=1 00:05:27.201 --rc geninfo_unexecuted_blocks=1 00:05:27.201 00:05:27.201 ' 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.201 --rc genhtml_branch_coverage=1 00:05:27.201 --rc genhtml_function_coverage=1 00:05:27.201 --rc genhtml_legend=1 00:05:27.201 --rc geninfo_all_blocks=1 00:05:27.201 --rc geninfo_unexecuted_blocks=1 00:05:27.201 00:05:27.201 ' 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.201 --rc genhtml_branch_coverage=1 00:05:27.201 --rc genhtml_function_coverage=1 00:05:27.201 --rc genhtml_legend=1 00:05:27.201 --rc geninfo_all_blocks=1 00:05:27.201 --rc geninfo_unexecuted_blocks=1 00:05:27.201 00:05:27.201 ' 00:05:27.201 22:11:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.201 OK 00:05:27.201 22:11:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.201 00:05:27.201 real 0m0.194s 00:05:27.201 user 0m0.116s 00:05:27.201 sys 0m0.090s 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.201 22:11:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.201 ************************************ 00:05:27.201 END TEST rpc_client 00:05:27.201 ************************************ 00:05:27.201 22:11:16 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
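Note: the scripts/common.sh trace that dominates this section is a field-wise version comparison: lt splits each version string on '.', '-' and ':' and walks the components until one side wins, so 1.15 < 2 is decided on the first field. A condensed sketch of the same idea; the function name ver_lt is ours, not SPDK's:

    ver_lt() {                         # usage: ver_lt 1.15 2  -> returns 0 when $1 < $2
        local IFS='.-:' i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                       # equal versions are not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: use old LCOV_OPTS'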
00:05:27.201 22:11:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.201 22:11:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.201 22:11:16 -- common/autotest_common.sh@10 -- # set +x 00:05:27.201 ************************************ 00:05:27.201 START TEST json_config 00:05:27.201 ************************************ 00:05:27.201 22:11:16 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.462 22:11:16 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.462 22:11:16 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.462 22:11:16 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.462 22:11:16 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.462 22:11:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.462 22:11:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.462 22:11:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.462 22:11:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.462 22:11:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.462 22:11:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.462 22:11:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.462 22:11:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.462 22:11:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.462 22:11:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:27.462 22:11:16 json_config -- scripts/common.sh@345 -- # : 1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.462 22:11:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.462 22:11:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@353 -- # local d=1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.462 22:11:16 json_config -- scripts/common.sh@355 -- # echo 1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.462 22:11:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:27.462 22:11:17 json_config -- scripts/common.sh@353 -- # local d=2 00:05:27.462 22:11:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.462 22:11:17 json_config -- scripts/common.sh@355 -- # echo 2 00:05:27.462 22:11:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.462 22:11:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.462 22:11:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.462 22:11:17 json_config -- scripts/common.sh@368 -- # return 0 00:05:27.462 22:11:17 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.462 22:11:17 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.462 --rc genhtml_branch_coverage=1 00:05:27.462 --rc genhtml_function_coverage=1 00:05:27.462 --rc genhtml_legend=1 00:05:27.462 --rc geninfo_all_blocks=1 00:05:27.462 --rc geninfo_unexecuted_blocks=1 00:05:27.462 00:05:27.462 ' 00:05:27.462 22:11:17 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.462 --rc genhtml_branch_coverage=1 00:05:27.462 --rc genhtml_function_coverage=1 00:05:27.462 --rc genhtml_legend=1 00:05:27.462 --rc geninfo_all_blocks=1 00:05:27.462 --rc geninfo_unexecuted_blocks=1 00:05:27.462 00:05:27.462 ' 00:05:27.462 22:11:17 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.462 --rc genhtml_branch_coverage=1 00:05:27.462 --rc genhtml_function_coverage=1 00:05:27.462 --rc genhtml_legend=1 00:05:27.462 --rc geninfo_all_blocks=1 00:05:27.462 --rc geninfo_unexecuted_blocks=1 00:05:27.462 00:05:27.462 ' 00:05:27.462 22:11:17 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.462 --rc genhtml_branch_coverage=1 00:05:27.462 --rc genhtml_function_coverage=1 00:05:27.462 --rc genhtml_legend=1 00:05:27.462 --rc geninfo_all_blocks=1 00:05:27.462 --rc geninfo_unexecuted_blocks=1 00:05:27.462 00:05:27.462 ' 00:05:27.462 22:11:17 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:27.462 22:11:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.462 22:11:17 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.462 22:11:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.462 22:11:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.462 22:11:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.462 22:11:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.462 22:11:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.462 22:11:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.463 22:11:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.463 22:11:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.463 22:11:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@51 -- # : 0 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
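Note: among the nvmf/common.sh settings traced above, the host identity comes from nvme-cli: gen-hostnqn emits an NQN whose uuid suffix doubles as the host ID. The same derivation standalone; nvme-cli must be installed, and the suffix-stripping expansion is shorthand of ours, not the script's exact code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the bare uuid, as NVME_HOSTID above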
00:05:27.463 22:11:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.463 22:11:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:27.463 INFO: JSON configuration test init 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 22:11:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.463 22:11:17 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:27.463 22:11:17 json_config -- json_config/common.sh@10 -- # shift 00:05:27.463 22:11:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.463 22:11:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.463 22:11:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.463 22:11:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.463 22:11:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.463 22:11:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107345 00:05:27.463 22:11:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.463 Waiting for target to run... 00:05:27.463 22:11:17 json_config -- json_config/common.sh@25 -- # waitforlisten 107345 /var/tmp/spdk_tgt.sock 00:05:27.463 22:11:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 107345 ']' 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.463 22:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.463 [2024-12-16 22:11:17.108049] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
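Note: json_config starts its target on a dedicated socket in --wait-for-rpc mode, then polls until the RPC server answers before loading any configuration. A minimal loop in the same spirit; rpc_get_methods and framework_start_init are standard SPDK RPCs, the 100-try bound echoes max_retries above, but the loop body is a sketch, not common.sh's exact code:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    for _ in $(seq 1 100); do
        $SPDK/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    # subsystems stay parked until this is sent, which is what --wait-for-rpc is for
    $SPDK/scripts/rpc.py -s "$SOCK" framework_start_init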
00:05:27.463 [2024-12-16 22:11:17.108093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107345 ] 00:05:27.722 [2024-12-16 22:11:17.391404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.722 [2024-12-16 22:11:17.403927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:28.291 22:11:17 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.291 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.291 22:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.291 22:11:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:28.291 22:11:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:31.584 22:11:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.584 22:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:31.584 22:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:31.584 22:11:21 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@54 -- # sort 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:31.584 22:11:21 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:31.584 22:11:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.584 22:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:31.844 22:11:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.844 22:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.844 22:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.844 MallocForNvmf0 00:05:31.844 22:11:21 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:31.844 22:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.104 MallocForNvmf1 00:05:32.104 22:11:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.104 22:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.363 [2024-12-16 22:11:21.857171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.363 22:11:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.363 22:11:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.623 22:11:22 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.623 22:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.623 22:11:22 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.623 22:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.883 22:11:22 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:32.883 22:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.143 [2024-12-16 22:11:22.647514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.143 22:11:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:33.143 22:11:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.143 22:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.143 22:11:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:33.143 22:11:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.143 22:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.143 22:11:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:33.143 22:11:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.143 22:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.402 MallocBdevForConfigChangeCheck 00:05:33.402 22:11:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:33.402 22:11:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.402 22:11:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 22:11:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:33.402 22:11:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.662 22:11:23 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:33.662 INFO: shutting down applications... 
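Note: collected in one place, the RPC sequence that built the NVMe-oF configuration saved above. Every argument is copied from the tgt_rpc calls logged in this run; only the $RPC shorthand is ours:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420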
00:05:33.662 22:11:23 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:33.662 22:11:23 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:33.662 22:11:23 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:33.662 22:11:23 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.572 Calling clear_iscsi_subsystem 00:05:35.572 Calling clear_nvmf_subsystem 00:05:35.572 Calling clear_nbd_subsystem 00:05:35.572 Calling clear_ublk_subsystem 00:05:35.572 Calling clear_vhost_blk_subsystem 00:05:35.572 Calling clear_vhost_scsi_subsystem 00:05:35.572 Calling clear_bdev_subsystem 00:05:35.572 22:11:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.572 22:11:24 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:35.572 22:11:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:35.573 22:11:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.573 22:11:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.573 22:11:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.573 22:11:25 json_config -- json_config/json_config.sh@352 -- # break 00:05:35.573 22:11:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:35.573 22:11:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:35.573 22:11:25 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.573 22:11:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.573 22:11:25 json_config -- json_config/common.sh@35 -- # [[ -n 107345 ]] 00:05:35.573 22:11:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107345 00:05:35.573 22:11:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.573 22:11:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.573 22:11:25 json_config -- json_config/common.sh@41 -- # kill -0 107345 00:05:35.573 22:11:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.141 22:11:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.141 22:11:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.141 22:11:25 json_config -- json_config/common.sh@41 -- # kill -0 107345 00:05:36.141 22:11:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.141 22:11:25 json_config -- json_config/common.sh@43 -- # break 00:05:36.141 22:11:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.141 22:11:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.141 SPDK target shutdown done 00:05:36.141 22:11:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:36.141 INFO: relaunching applications... 
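The shutdown traced above follows a fixed pattern in json_config/common.sh: send SIGINT, then poll the pid for up to 30 half-second intervals, using kill -0 purely as an existence test. A condensed sketch of that loop (pid held in a plain variable here rather than the script's app_pid array):

# Ask the target to exit, then wait up to ~15 s for the pid to disappear.
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # signal 0 = "does the process still exist?"
    sleep 0.5
done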
00:05:36.141 22:11:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.141 22:11:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.141 22:11:25 json_config -- json_config/common.sh@10 -- # shift 00:05:36.141 22:11:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.141 22:11:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.141 22:11:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.141 22:11:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.141 22:11:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.141 22:11:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108827 00:05:36.141 22:11:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.141 Waiting for target to run... 00:05:36.141 22:11:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.141 22:11:25 json_config -- json_config/common.sh@25 -- # waitforlisten 108827 /var/tmp/spdk_tgt.sock 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 108827 ']' 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.141 22:11:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.141 [2024-12-16 22:11:25.828690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:36.141 [2024-12-16 22:11:25.828741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108827 ] 00:05:36.711 [2024-12-16 22:11:26.287948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.711 [2024-12-16 22:11:26.308595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.004 [2024-12-16 22:11:29.311016] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.004 [2024-12-16 22:11:29.343264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.574 22:11:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.574 22:11:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:40.574 22:11:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:40.574 00:05:40.574 22:11:30 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:40.574 22:11:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:40.574 INFO: Checking if target configuration is the same... 
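The comparison that follows never diffs the raw files directly: both the live save_config dump and the stored spdk_tgt_config.json are first normalized with config_filter.py -method sort, so JSON key ordering cannot produce a false mismatch. A condensed sketch of what json_diff.sh does, per the trace below:

# Order-insensitive comparison of the live config vs. the saved JSON file.
a=$(mktemp /tmp/62.XXX)
b=$(mktemp /tmp/spdk_tgt_config.json.XXX)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$a"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$b"
diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
rm "$a" "$b"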
00:05:40.574 22:11:30 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.574 22:11:30 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:40.574 22:11:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.574 + '[' 2 -ne 2 ']' 00:05:40.574 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.574 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:40.574 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.574 +++ basename /dev/fd/62 00:05:40.574 ++ mktemp /tmp/62.XXX 00:05:40.574 + tmp_file_1=/tmp/62.IMi 00:05:40.574 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.574 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.574 + tmp_file_2=/tmp/spdk_tgt_config.json.ReL 00:05:40.574 + ret=0 00:05:40.574 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.835 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.835 + diff -u /tmp/62.IMi /tmp/spdk_tgt_config.json.ReL 00:05:40.835 + echo 'INFO: JSON config files are the same' 00:05:40.835 INFO: JSON config files are the same 00:05:40.835 + rm /tmp/62.IMi /tmp/spdk_tgt_config.json.ReL 00:05:40.835 + exit 0 00:05:40.835 22:11:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:40.835 22:11:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.835 INFO: changing configuration and checking if this can be detected... 00:05:40.835 22:11:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.835 22:11:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.096 22:11:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.096 22:11:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:41.096 22:11:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.096 + '[' 2 -ne 2 ']' 00:05:41.096 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.096 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:41.096 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.096 +++ basename /dev/fd/62 00:05:41.096 ++ mktemp /tmp/62.XXX 00:05:41.096 + tmp_file_1=/tmp/62.tae 00:05:41.096 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.096 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.096 + tmp_file_2=/tmp/spdk_tgt_config.json.bQS 00:05:41.096 + ret=0 00:05:41.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.356 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.356 + diff -u /tmp/62.tae /tmp/spdk_tgt_config.json.bQS 00:05:41.356 + ret=1 00:05:41.356 + echo '=== Start of file: /tmp/62.tae ===' 00:05:41.356 + cat /tmp/62.tae 00:05:41.356 + echo '=== End of file: /tmp/62.tae ===' 00:05:41.356 + echo '' 00:05:41.356 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bQS ===' 00:05:41.356 + cat /tmp/spdk_tgt_config.json.bQS 00:05:41.356 + echo '=== End of file: /tmp/spdk_tgt_config.json.bQS ===' 00:05:41.356 + echo '' 00:05:41.356 + rm /tmp/62.tae /tmp/spdk_tgt_config.json.bQS 00:05:41.356 + exit 1 00:05:41.356 22:11:31 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:41.356 INFO: configuration change detected. 00:05:41.356 22:11:31 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:41.356 22:11:31 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:41.356 22:11:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.356 22:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@324 -- # [[ -n 108827 ]] 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.616 22:11:31 json_config -- json_config/json_config.sh@330 -- # killprocess 108827 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@954 -- # '[' -z 108827 ']' 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@958 -- # kill -0 108827 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@959 -- # uname 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.616 22:11:31 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108827 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108827' 00:05:41.616 killing process with pid 108827 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@973 -- # kill 108827 00:05:41.616 22:11:31 json_config -- common/autotest_common.sh@978 -- # wait 108827 00:05:42.999 22:11:32 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.999 22:11:32 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:42.999 22:11:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.999 22:11:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.999 22:11:32 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:42.999 22:11:32 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:42.999 INFO: Success 00:05:42.999 00:05:42.999 real 0m15.792s 00:05:42.999 user 0m17.032s 00:05:42.999 sys 0m1.942s 00:05:42.999 22:11:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.999 22:11:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.999 ************************************ 00:05:42.999 END TEST json_config 00:05:42.999 ************************************ 00:05:42.999 22:11:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.999 22:11:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.999 22:11:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.999 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:05:43.260 ************************************ 00:05:43.260 START TEST json_config_extra_key 00:05:43.260 ************************************ 00:05:43.260 22:11:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.260 22:11:32 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.260 22:11:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.260 22:11:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.260 22:11:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.260 22:11:32 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.260 22:11:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.261 --rc genhtml_branch_coverage=1 00:05:43.261 --rc genhtml_function_coverage=1 00:05:43.261 --rc genhtml_legend=1 00:05:43.261 --rc geninfo_all_blocks=1 00:05:43.261 --rc geninfo_unexecuted_blocks=1 00:05:43.261 00:05:43.261 ' 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.261 --rc genhtml_branch_coverage=1 00:05:43.261 --rc genhtml_function_coverage=1 00:05:43.261 --rc genhtml_legend=1 00:05:43.261 --rc geninfo_all_blocks=1 00:05:43.261 --rc geninfo_unexecuted_blocks=1 00:05:43.261 00:05:43.261 ' 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.261 --rc genhtml_branch_coverage=1 00:05:43.261 --rc genhtml_function_coverage=1 00:05:43.261 --rc genhtml_legend=1 00:05:43.261 --rc geninfo_all_blocks=1 00:05:43.261 --rc geninfo_unexecuted_blocks=1 00:05:43.261 00:05:43.261 ' 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.261 --rc genhtml_branch_coverage=1 00:05:43.261 --rc genhtml_function_coverage=1 00:05:43.261 --rc genhtml_legend=1 00:05:43.261 --rc geninfo_all_blocks=1 00:05:43.261 --rc geninfo_unexecuted_blocks=1 00:05:43.261 00:05:43.261 ' 00:05:43.261 22:11:32 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.261 22:11:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.261 22:11:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.261 22:11:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.261 22:11:32 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.261 22:11:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.261 22:11:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.261 22:11:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.261 INFO: launching applications... 
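One real wart surfaces while nvmf/common.sh is sourced above: line 33 evaluates '[' '' -eq 1 ']', and test's -eq requires integer operands, hence the "[: : integer expression expected" message in the log. The usual fix is a numeric fallback on expansion; a sketch (the variable name is illustrative, not the script's actual one):

# Comparing an empty expansion with -eq triggers "integer expression expected".
# Defaulting the expansion to 0 keeps the test numeric:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi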
00:05:43.261 22:11:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=110125 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.261 Waiting for target to run... 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 110125 /var/tmp/spdk_tgt.sock 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 110125 ']' 00:05:43.261 22:11:32 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.261 22:11:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.262 22:11:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.262 22:11:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 [2024-12-16 22:11:32.954269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:43.262 [2024-12-16 22:11:32.954322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110125 ] 00:05:43.832 [2024-12-16 22:11:33.403587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.832 [2024-12-16 22:11:33.425252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.091 22:11:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.091 22:11:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.091 00:05:44.091 22:11:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.091 INFO: shutting down applications... 
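As the declare -A lines in the trace show, json_config/common.sh tracks each app through parallel associative arrays keyed by app name. A condensed sketch of that bookkeeping plus the launch it drives, run from the spdk checkout (values copied from the trace; the launch line mirrors the spdk_tgt invocation above):

# Per-app state, keyed by "target".
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='test/json_config/extra_key.json')

# Launch the target from its JSON config and remember the pid, so the
# SIGINT / kill -0 wait loop shown earlier can tear it down.
build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
    --json "${configs_path[target]}" &
app_pid[target]=$!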
00:05:44.091 22:11:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 110125 ]] 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 110125 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110125 00:05:44.091 22:11:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 110125 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.662 22:11:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.662 SPDK target shutdown done 00:05:44.662 22:11:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.662 Success 00:05:44.662 00:05:44.662 real 0m1.577s 00:05:44.662 user 0m1.177s 00:05:44.662 sys 0m0.586s 00:05:44.662 22:11:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.662 22:11:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.662 ************************************ 00:05:44.662 END TEST json_config_extra_key 00:05:44.662 ************************************ 00:05:44.662 22:11:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.662 22:11:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.662 22:11:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.662 22:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.922 ************************************ 00:05:44.922 START TEST alias_rpc 00:05:44.922 ************************************ 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.923 * Looking for test storage... 
00:05:44.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.923 22:11:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.923 --rc genhtml_branch_coverage=1 00:05:44.923 --rc genhtml_function_coverage=1 00:05:44.923 --rc genhtml_legend=1 00:05:44.923 --rc geninfo_all_blocks=1 00:05:44.923 --rc geninfo_unexecuted_blocks=1 00:05:44.923 00:05:44.923 ' 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.923 --rc genhtml_branch_coverage=1 00:05:44.923 --rc genhtml_function_coverage=1 00:05:44.923 --rc genhtml_legend=1 00:05:44.923 --rc geninfo_all_blocks=1 00:05:44.923 --rc geninfo_unexecuted_blocks=1 00:05:44.923 00:05:44.923 ' 00:05:44.923 22:11:34 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.923 --rc genhtml_branch_coverage=1 00:05:44.923 --rc genhtml_function_coverage=1 00:05:44.923 --rc genhtml_legend=1 00:05:44.923 --rc geninfo_all_blocks=1 00:05:44.923 --rc geninfo_unexecuted_blocks=1 00:05:44.923 00:05:44.923 ' 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.923 --rc genhtml_branch_coverage=1 00:05:44.923 --rc genhtml_function_coverage=1 00:05:44.923 --rc genhtml_legend=1 00:05:44.923 --rc geninfo_all_blocks=1 00:05:44.923 --rc geninfo_unexecuted_blocks=1 00:05:44.923 00:05:44.923 ' 00:05:44.923 22:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.923 22:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110566 00:05:44.923 22:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110566 00:05:44.923 22:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110566 ']' 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.923 22:11:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.923 [2024-12-16 22:11:34.600716] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:44.923 [2024-12-16 22:11:34.600761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110566 ] 00:05:45.183 [2024-12-16 22:11:34.670707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.183 [2024-12-16 22:11:34.693850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.444 22:11:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.444 22:11:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.444 22:11:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.444 22:11:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110566 00:05:45.444 22:11:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110566 ']' 00:05:45.444 22:11:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110566 00:05:45.444 22:11:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.444 22:11:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.444 22:11:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110566 00:05:45.704 22:11:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.704 22:11:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.704 22:11:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110566' 00:05:45.704 killing process with pid 110566 00:05:45.704 22:11:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 110566 00:05:45.704 22:11:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 110566 00:05:45.963 00:05:45.963 real 0m1.085s 00:05:45.963 user 0m1.113s 00:05:45.963 sys 0m0.401s 00:05:45.964 22:11:35 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.964 22:11:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.964 ************************************ 00:05:45.964 END TEST alias_rpc 00:05:45.964 ************************************ 00:05:45.964 22:11:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:45.964 22:11:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.964 22:11:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.964 22:11:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.964 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:05:45.964 ************************************ 00:05:45.964 START TEST spdkcli_tcp 00:05:45.964 ************************************ 00:05:45.964 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:45.964 * Looking for test storage... 
00:05:45.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:45.964 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.964 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.964 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.224 22:11:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.224 --rc genhtml_branch_coverage=1 00:05:46.224 --rc genhtml_function_coverage=1 00:05:46.224 --rc genhtml_legend=1 00:05:46.224 --rc geninfo_all_blocks=1 00:05:46.224 --rc geninfo_unexecuted_blocks=1 00:05:46.224 00:05:46.224 ' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.224 --rc genhtml_branch_coverage=1 00:05:46.224 --rc genhtml_function_coverage=1 00:05:46.224 --rc genhtml_legend=1 00:05:46.224 --rc geninfo_all_blocks=1 00:05:46.224 --rc 
geninfo_unexecuted_blocks=1 00:05:46.224 00:05:46.224 ' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.224 --rc genhtml_branch_coverage=1 00:05:46.224 --rc genhtml_function_coverage=1 00:05:46.224 --rc genhtml_legend=1 00:05:46.224 --rc geninfo_all_blocks=1 00:05:46.224 --rc geninfo_unexecuted_blocks=1 00:05:46.224 00:05:46.224 ' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.224 --rc genhtml_branch_coverage=1 00:05:46.224 --rc genhtml_function_coverage=1 00:05:46.224 --rc genhtml_legend=1 00:05:46.224 --rc geninfo_all_blocks=1 00:05:46.224 --rc geninfo_unexecuted_blocks=1 00:05:46.224 00:05:46.224 ' 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110737 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110737 00:05:46.224 22:11:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110737 ']' 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.224 22:11:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.224 [2024-12-16 22:11:35.756222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:46.224 [2024-12-16 22:11:35.756269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110737 ] 00:05:46.224 [2024-12-16 22:11:35.830228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.224 [2024-12-16 22:11:35.854321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.224 [2024-12-16 22:11:35.854321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.484 22:11:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.484 22:11:36 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:46.484 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110859 00:05:46.484 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.484 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.745 [ 00:05:46.745 "bdev_malloc_delete", 00:05:46.745 "bdev_malloc_create", 00:05:46.745 "bdev_null_resize", 00:05:46.745 "bdev_null_delete", 00:05:46.745 "bdev_null_create", 00:05:46.745 "bdev_nvme_cuse_unregister", 00:05:46.745 "bdev_nvme_cuse_register", 00:05:46.745 "bdev_opal_new_user", 00:05:46.745 "bdev_opal_set_lock_state", 00:05:46.745 "bdev_opal_delete", 00:05:46.745 "bdev_opal_get_info", 00:05:46.745 "bdev_opal_create", 00:05:46.745 "bdev_nvme_opal_revert", 00:05:46.745 "bdev_nvme_opal_init", 00:05:46.745 "bdev_nvme_send_cmd", 00:05:46.745 "bdev_nvme_set_keys", 00:05:46.745 "bdev_nvme_get_path_iostat", 00:05:46.745 "bdev_nvme_get_mdns_discovery_info", 00:05:46.745 "bdev_nvme_stop_mdns_discovery", 00:05:46.745 "bdev_nvme_start_mdns_discovery", 00:05:46.745 "bdev_nvme_set_multipath_policy", 00:05:46.745 "bdev_nvme_set_preferred_path", 00:05:46.745 "bdev_nvme_get_io_paths", 00:05:46.745 "bdev_nvme_remove_error_injection", 00:05:46.745 "bdev_nvme_add_error_injection", 00:05:46.745 "bdev_nvme_get_discovery_info", 00:05:46.745 "bdev_nvme_stop_discovery", 00:05:46.745 "bdev_nvme_start_discovery", 00:05:46.745 "bdev_nvme_get_controller_health_info", 00:05:46.745 "bdev_nvme_disable_controller", 00:05:46.745 "bdev_nvme_enable_controller", 00:05:46.745 "bdev_nvme_reset_controller", 00:05:46.745 "bdev_nvme_get_transport_statistics", 00:05:46.745 "bdev_nvme_apply_firmware", 00:05:46.745 "bdev_nvme_detach_controller", 00:05:46.745 "bdev_nvme_get_controllers", 00:05:46.745 "bdev_nvme_attach_controller", 00:05:46.745 "bdev_nvme_set_hotplug", 00:05:46.745 "bdev_nvme_set_options", 00:05:46.745 "bdev_passthru_delete", 00:05:46.745 "bdev_passthru_create", 00:05:46.745 "bdev_lvol_set_parent_bdev", 00:05:46.745 "bdev_lvol_set_parent", 00:05:46.745 "bdev_lvol_check_shallow_copy", 00:05:46.745 "bdev_lvol_start_shallow_copy", 00:05:46.745 "bdev_lvol_grow_lvstore", 00:05:46.745 "bdev_lvol_get_lvols", 00:05:46.745 "bdev_lvol_get_lvstores", 00:05:46.745 "bdev_lvol_delete", 00:05:46.745 "bdev_lvol_set_read_only", 00:05:46.745 "bdev_lvol_resize", 00:05:46.745 "bdev_lvol_decouple_parent", 00:05:46.745 "bdev_lvol_inflate", 00:05:46.745 "bdev_lvol_rename", 00:05:46.745 "bdev_lvol_clone_bdev", 00:05:46.745 "bdev_lvol_clone", 00:05:46.745 "bdev_lvol_snapshot", 00:05:46.745 "bdev_lvol_create", 00:05:46.745 "bdev_lvol_delete_lvstore", 00:05:46.745 "bdev_lvol_rename_lvstore", 
00:05:46.745 "bdev_lvol_create_lvstore", 00:05:46.745 "bdev_raid_set_options", 00:05:46.745 "bdev_raid_remove_base_bdev", 00:05:46.745 "bdev_raid_add_base_bdev", 00:05:46.745 "bdev_raid_delete", 00:05:46.745 "bdev_raid_create", 00:05:46.745 "bdev_raid_get_bdevs", 00:05:46.745 "bdev_error_inject_error", 00:05:46.745 "bdev_error_delete", 00:05:46.745 "bdev_error_create", 00:05:46.745 "bdev_split_delete", 00:05:46.745 "bdev_split_create", 00:05:46.745 "bdev_delay_delete", 00:05:46.745 "bdev_delay_create", 00:05:46.745 "bdev_delay_update_latency", 00:05:46.745 "bdev_zone_block_delete", 00:05:46.745 "bdev_zone_block_create", 00:05:46.745 "blobfs_create", 00:05:46.745 "blobfs_detect", 00:05:46.745 "blobfs_set_cache_size", 00:05:46.745 "bdev_aio_delete", 00:05:46.746 "bdev_aio_rescan", 00:05:46.746 "bdev_aio_create", 00:05:46.746 "bdev_ftl_set_property", 00:05:46.746 "bdev_ftl_get_properties", 00:05:46.746 "bdev_ftl_get_stats", 00:05:46.746 "bdev_ftl_unmap", 00:05:46.746 "bdev_ftl_unload", 00:05:46.746 "bdev_ftl_delete", 00:05:46.746 "bdev_ftl_load", 00:05:46.746 "bdev_ftl_create", 00:05:46.746 "bdev_virtio_attach_controller", 00:05:46.746 "bdev_virtio_scsi_get_devices", 00:05:46.746 "bdev_virtio_detach_controller", 00:05:46.746 "bdev_virtio_blk_set_hotplug", 00:05:46.746 "bdev_iscsi_delete", 00:05:46.746 "bdev_iscsi_create", 00:05:46.746 "bdev_iscsi_set_options", 00:05:46.746 "accel_error_inject_error", 00:05:46.746 "ioat_scan_accel_module", 00:05:46.746 "dsa_scan_accel_module", 00:05:46.746 "iaa_scan_accel_module", 00:05:46.746 "vfu_virtio_create_fs_endpoint", 00:05:46.746 "vfu_virtio_create_scsi_endpoint", 00:05:46.746 "vfu_virtio_scsi_remove_target", 00:05:46.746 "vfu_virtio_scsi_add_target", 00:05:46.746 "vfu_virtio_create_blk_endpoint", 00:05:46.746 "vfu_virtio_delete_endpoint", 00:05:46.746 "keyring_file_remove_key", 00:05:46.746 "keyring_file_add_key", 00:05:46.746 "keyring_linux_set_options", 00:05:46.746 "fsdev_aio_delete", 00:05:46.746 "fsdev_aio_create", 00:05:46.746 "iscsi_get_histogram", 00:05:46.746 "iscsi_enable_histogram", 00:05:46.746 "iscsi_set_options", 00:05:46.746 "iscsi_get_auth_groups", 00:05:46.746 "iscsi_auth_group_remove_secret", 00:05:46.746 "iscsi_auth_group_add_secret", 00:05:46.746 "iscsi_delete_auth_group", 00:05:46.746 "iscsi_create_auth_group", 00:05:46.746 "iscsi_set_discovery_auth", 00:05:46.746 "iscsi_get_options", 00:05:46.746 "iscsi_target_node_request_logout", 00:05:46.746 "iscsi_target_node_set_redirect", 00:05:46.746 "iscsi_target_node_set_auth", 00:05:46.746 "iscsi_target_node_add_lun", 00:05:46.746 "iscsi_get_stats", 00:05:46.746 "iscsi_get_connections", 00:05:46.746 "iscsi_portal_group_set_auth", 00:05:46.746 "iscsi_start_portal_group", 00:05:46.746 "iscsi_delete_portal_group", 00:05:46.746 "iscsi_create_portal_group", 00:05:46.746 "iscsi_get_portal_groups", 00:05:46.746 "iscsi_delete_target_node", 00:05:46.746 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.746 "iscsi_target_node_add_pg_ig_maps", 00:05:46.746 "iscsi_create_target_node", 00:05:46.746 "iscsi_get_target_nodes", 00:05:46.746 "iscsi_delete_initiator_group", 00:05:46.746 "iscsi_initiator_group_remove_initiators", 00:05:46.746 "iscsi_initiator_group_add_initiators", 00:05:46.746 "iscsi_create_initiator_group", 00:05:46.746 "iscsi_get_initiator_groups", 00:05:46.746 "nvmf_set_crdt", 00:05:46.746 "nvmf_set_config", 00:05:46.746 "nvmf_set_max_subsystems", 00:05:46.746 "nvmf_stop_mdns_prr", 00:05:46.746 "nvmf_publish_mdns_prr", 00:05:46.746 "nvmf_subsystem_get_listeners", 00:05:46.746 
"nvmf_subsystem_get_qpairs", 00:05:46.746 "nvmf_subsystem_get_controllers", 00:05:46.746 "nvmf_get_stats", 00:05:46.746 "nvmf_get_transports", 00:05:46.746 "nvmf_create_transport", 00:05:46.746 "nvmf_get_targets", 00:05:46.746 "nvmf_delete_target", 00:05:46.746 "nvmf_create_target", 00:05:46.746 "nvmf_subsystem_allow_any_host", 00:05:46.746 "nvmf_subsystem_set_keys", 00:05:46.746 "nvmf_subsystem_remove_host", 00:05:46.746 "nvmf_subsystem_add_host", 00:05:46.746 "nvmf_ns_remove_host", 00:05:46.746 "nvmf_ns_add_host", 00:05:46.746 "nvmf_subsystem_remove_ns", 00:05:46.746 "nvmf_subsystem_set_ns_ana_group", 00:05:46.746 "nvmf_subsystem_add_ns", 00:05:46.746 "nvmf_subsystem_listener_set_ana_state", 00:05:46.746 "nvmf_discovery_get_referrals", 00:05:46.746 "nvmf_discovery_remove_referral", 00:05:46.746 "nvmf_discovery_add_referral", 00:05:46.746 "nvmf_subsystem_remove_listener", 00:05:46.746 "nvmf_subsystem_add_listener", 00:05:46.746 "nvmf_delete_subsystem", 00:05:46.746 "nvmf_create_subsystem", 00:05:46.746 "nvmf_get_subsystems", 00:05:46.746 "env_dpdk_get_mem_stats", 00:05:46.746 "nbd_get_disks", 00:05:46.746 "nbd_stop_disk", 00:05:46.746 "nbd_start_disk", 00:05:46.746 "ublk_recover_disk", 00:05:46.746 "ublk_get_disks", 00:05:46.746 "ublk_stop_disk", 00:05:46.746 "ublk_start_disk", 00:05:46.746 "ublk_destroy_target", 00:05:46.746 "ublk_create_target", 00:05:46.746 "virtio_blk_create_transport", 00:05:46.746 "virtio_blk_get_transports", 00:05:46.746 "vhost_controller_set_coalescing", 00:05:46.746 "vhost_get_controllers", 00:05:46.746 "vhost_delete_controller", 00:05:46.746 "vhost_create_blk_controller", 00:05:46.746 "vhost_scsi_controller_remove_target", 00:05:46.746 "vhost_scsi_controller_add_target", 00:05:46.746 "vhost_start_scsi_controller", 00:05:46.746 "vhost_create_scsi_controller", 00:05:46.746 "thread_set_cpumask", 00:05:46.746 "scheduler_set_options", 00:05:46.746 "framework_get_governor", 00:05:46.746 "framework_get_scheduler", 00:05:46.746 "framework_set_scheduler", 00:05:46.746 "framework_get_reactors", 00:05:46.746 "thread_get_io_channels", 00:05:46.746 "thread_get_pollers", 00:05:46.746 "thread_get_stats", 00:05:46.746 "framework_monitor_context_switch", 00:05:46.746 "spdk_kill_instance", 00:05:46.746 "log_enable_timestamps", 00:05:46.746 "log_get_flags", 00:05:46.746 "log_clear_flag", 00:05:46.746 "log_set_flag", 00:05:46.746 "log_get_level", 00:05:46.746 "log_set_level", 00:05:46.746 "log_get_print_level", 00:05:46.746 "log_set_print_level", 00:05:46.746 "framework_enable_cpumask_locks", 00:05:46.746 "framework_disable_cpumask_locks", 00:05:46.746 "framework_wait_init", 00:05:46.746 "framework_start_init", 00:05:46.746 "scsi_get_devices", 00:05:46.746 "bdev_get_histogram", 00:05:46.746 "bdev_enable_histogram", 00:05:46.746 "bdev_set_qos_limit", 00:05:46.746 "bdev_set_qd_sampling_period", 00:05:46.746 "bdev_get_bdevs", 00:05:46.746 "bdev_reset_iostat", 00:05:46.746 "bdev_get_iostat", 00:05:46.746 "bdev_examine", 00:05:46.746 "bdev_wait_for_examine", 00:05:46.746 "bdev_set_options", 00:05:46.746 "accel_get_stats", 00:05:46.746 "accel_set_options", 00:05:46.746 "accel_set_driver", 00:05:46.746 "accel_crypto_key_destroy", 00:05:46.746 "accel_crypto_keys_get", 00:05:46.746 "accel_crypto_key_create", 00:05:46.746 "accel_assign_opc", 00:05:46.746 "accel_get_module_info", 00:05:46.746 "accel_get_opc_assignments", 00:05:46.746 "vmd_rescan", 00:05:46.746 "vmd_remove_device", 00:05:46.746 "vmd_enable", 00:05:46.746 "sock_get_default_impl", 00:05:46.746 "sock_set_default_impl", 
00:05:46.746 "sock_impl_set_options", 00:05:46.746 "sock_impl_get_options", 00:05:46.746 "iobuf_get_stats", 00:05:46.746 "iobuf_set_options", 00:05:46.746 "keyring_get_keys", 00:05:46.746 "vfu_tgt_set_base_path", 00:05:46.746 "framework_get_pci_devices", 00:05:46.746 "framework_get_config", 00:05:46.746 "framework_get_subsystems", 00:05:46.746 "fsdev_set_opts", 00:05:46.746 "fsdev_get_opts", 00:05:46.746 "trace_get_info", 00:05:46.746 "trace_get_tpoint_group_mask", 00:05:46.746 "trace_disable_tpoint_group", 00:05:46.746 "trace_enable_tpoint_group", 00:05:46.746 "trace_clear_tpoint_mask", 00:05:46.746 "trace_set_tpoint_mask", 00:05:46.746 "notify_get_notifications", 00:05:46.746 "notify_get_types", 00:05:46.746 "spdk_get_version", 00:05:46.746 "rpc_get_methods" 00:05:46.746 ] 00:05:46.746 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.746 22:11:36 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.746 22:11:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.746 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.746 22:11:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110737 00:05:46.746 22:11:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110737 ']' 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110737 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110737 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110737' 00:05:46.747 killing process with pid 110737 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110737 00:05:46.747 22:11:36 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110737 00:05:47.007 00:05:47.007 real 0m1.086s 00:05:47.007 user 0m1.832s 00:05:47.007 sys 0m0.441s 00:05:47.007 22:11:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.007 22:11:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 END TEST spdkcli_tcp 00:05:47.007 ************************************ 00:05:47.007 22:11:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.007 22:11:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.007 22:11:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.007 22:11:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 START TEST dpdk_mem_utility 00:05:47.007 ************************************ 00:05:47.007 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.267 * Looking for test storage... 
00:05:47.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.267 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.267 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.267 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.267 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.267 22:11:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.268 22:11:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.268 --rc genhtml_branch_coverage=1 00:05:47.268 --rc genhtml_function_coverage=1 00:05:47.268 --rc genhtml_legend=1 00:05:47.268 --rc geninfo_all_blocks=1 00:05:47.268 --rc geninfo_unexecuted_blocks=1 00:05:47.268 00:05:47.268 ' 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.268 --rc 
genhtml_branch_coverage=1 00:05:47.268 --rc genhtml_function_coverage=1 00:05:47.268 --rc genhtml_legend=1 00:05:47.268 --rc geninfo_all_blocks=1 00:05:47.268 --rc geninfo_unexecuted_blocks=1 00:05:47.268 00:05:47.268 ' 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.268 --rc genhtml_branch_coverage=1 00:05:47.268 --rc genhtml_function_coverage=1 00:05:47.268 --rc genhtml_legend=1 00:05:47.268 --rc geninfo_all_blocks=1 00:05:47.268 --rc geninfo_unexecuted_blocks=1 00:05:47.268 00:05:47.268 ' 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.268 --rc genhtml_branch_coverage=1 00:05:47.268 --rc genhtml_function_coverage=1 00:05:47.268 --rc genhtml_legend=1 00:05:47.268 --rc geninfo_all_blocks=1 00:05:47.268 --rc geninfo_unexecuted_blocks=1 00:05:47.268 00:05:47.268 ' 00:05:47.268 22:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.268 22:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110941 00:05:47.268 22:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110941 00:05:47.268 22:11:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 110941 ']' 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.268 22:11:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.268 [2024-12-16 22:11:36.897661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
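An aside for readers tracing this test: everything test_dpdk_mem_info.sh does against spdk_tgt goes over plain JSON-RPC on the UNIX domain socket, and both methods it leans on (rpc_get_methods, env_dpdk_get_mem_stats) appear in the method list dumped earlier in this log. A minimal client sketch, assuming a spdk_tgt already listening on /var/tmp/spdk.sock and deliberately not using the project's own scripts/rpc.py:

# rpc_probe.py - bare JSON-RPC over the SPDK UNIX socket (illustrative sketch)
import json, socket

def rpc(sock_path, method, req_id=1):
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:                        # read until one whole JSON reply parses
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("spdk_tgt closed the socket")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:             # partial reply so far; keep reading
                continue

methods = rpc("/var/tmp/spdk.sock", "rpc_get_methods")["result"]
print(len(methods), "RPC methods registered")
print(rpc("/var/tmp/spdk.sock", "env_dpdk_get_mem_stats", req_id=2))

As the test output below shows, env_dpdk_get_mem_stats replies with the path of a heap dump ({"filename": "/tmp/spdk_mem_dump.txt"}), which scripts/dpdk_mem_info.py then parses into the heap/mempool/memzone tables that follow.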
00:05:47.268 [2024-12-16 22:11:36.897709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110941 ] 00:05:47.528 [2024-12-16 22:11:36.971809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.528 [2024-12-16 22:11:36.994510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.528 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.528 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:47.528 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.528 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.528 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.528 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.528 { 00:05:47.528 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.528 } 00:05:47.528 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.528 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.790 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:47.790 1 heaps totaling size 818.000000 MiB 00:05:47.790 size: 818.000000 MiB heap id: 0 00:05:47.790 end heaps---------- 00:05:47.790 9 mempools totaling size 603.782043 MiB 00:05:47.790 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.790 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.790 size: 100.555481 MiB name: bdev_io_110941 00:05:47.790 size: 50.003479 MiB name: msgpool_110941 00:05:47.790 size: 36.509338 MiB name: fsdev_io_110941 00:05:47.790 size: 21.763794 MiB name: PDU_Pool 00:05:47.790 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.790 size: 4.133484 MiB name: evtpool_110941 00:05:47.790 size: 0.026123 MiB name: Session_Pool 00:05:47.790 end mempools------- 00:05:47.790 6 memzones totaling size 4.142822 MiB 00:05:47.790 size: 1.000366 MiB name: RG_ring_0_110941 00:05:47.790 size: 1.000366 MiB name: RG_ring_1_110941 00:05:47.790 size: 1.000366 MiB name: RG_ring_4_110941 00:05:47.790 size: 1.000366 MiB name: RG_ring_5_110941 00:05:47.790 size: 0.125366 MiB name: RG_ring_2_110941 00:05:47.790 size: 0.015991 MiB name: RG_ring_3_110941 00:05:47.790 end memzones------- 00:05:47.790 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.790 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:47.790 list of free elements. 
size: 10.852478 MiB 00:05:47.790 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:47.790 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:47.790 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:47.790 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:47.790 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:47.790 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:47.790 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:47.790 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:47.790 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:47.790 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:47.790 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:47.790 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:47.790 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:47.790 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:47.790 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:47.790 list of standard malloc elements. size: 199.218628 MiB 00:05:47.790 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:47.790 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:47.790 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.790 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:47.790 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:47.790 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.790 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:47.790 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.790 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:47.790 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:47.790 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:47.790 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:47.790 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:47.790 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:47.790 list of memzone associated elements. size: 607.928894 MiB 00:05:47.790 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:47.790 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.790 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:47.790 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.790 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:47.790 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_110941_0 00:05:47.790 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:47.790 associated memzone info: size: 48.002930 MiB name: MP_msgpool_110941_0 00:05:47.790 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:47.790 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_110941_0 00:05:47.790 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:47.790 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.790 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:47.790 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.790 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:47.790 associated memzone info: size: 3.000122 MiB name: MP_evtpool_110941_0 00:05:47.790 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:47.790 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_110941 00:05:47.791 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.791 associated memzone info: size: 1.007996 MiB name: MP_evtpool_110941 00:05:47.791 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:47.791 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.791 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:47.791 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.791 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:47.791 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.791 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:47.791 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.791 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:47.791 associated memzone info: size: 1.000366 MiB name: RG_ring_0_110941 00:05:47.791 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:47.791 associated memzone info: size: 1.000366 MiB name: RG_ring_1_110941 00:05:47.791 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:47.791 associated memzone info: size: 1.000366 MiB name: RG_ring_4_110941 00:05:47.791 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:47.791 associated memzone info: size: 1.000366 MiB name: RG_ring_5_110941 00:05:47.791 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:47.791 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_110941 00:05:47.791 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:47.791 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_110941 00:05:47.791 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:47.791 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.791 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:47.791 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.791 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:47.791 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.791 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:47.791 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_110941 00:05:47.791 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:47.791 associated memzone info: size: 0.125366 MiB name: RG_ring_2_110941 00:05:47.791 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:47.791 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.791 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:47.791 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.791 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:47.791 associated memzone info: size: 0.015991 MiB name: RG_ring_3_110941 00:05:47.791 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:47.791 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.791 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:47.791 associated memzone info: size: 0.000183 MiB name: MP_msgpool_110941 00:05:47.791 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:47.791 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_110941 00:05:47.791 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:47.791 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_110941 00:05:47.791 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:47.791 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.791 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.791 22:11:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110941 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 110941 ']' 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 110941 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110941 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110941' 00:05:47.791 killing process with pid 110941 00:05:47.791 22:11:37 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 110941 00:05:47.791 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 110941 00:05:48.051 00:05:48.051 real 0m0.992s 00:05:48.051 user 0m0.940s 00:05:48.051 sys 0m0.405s 00:05:48.051 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.051 22:11:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.051 ************************************ 00:05:48.051 END TEST dpdk_mem_utility 00:05:48.051 ************************************ 00:05:48.051 22:11:37 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.051 22:11:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.051 22:11:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.051 22:11:37 -- common/autotest_common.sh@10 -- # set +x 00:05:48.051 ************************************ 00:05:48.051 START TEST event 00:05:48.051 ************************************ 00:05:48.051 22:11:37 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.311 * Looking for test storage... 00:05:48.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.312 22:11:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.312 22:11:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.312 22:11:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.312 22:11:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.312 22:11:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.312 22:11:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.312 22:11:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.312 22:11:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.312 22:11:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.312 22:11:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.312 22:11:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.312 22:11:37 event -- scripts/common.sh@344 -- # case "$op" in 00:05:48.312 22:11:37 event -- scripts/common.sh@345 -- # : 1 00:05:48.312 22:11:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.312 22:11:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.312 22:11:37 event -- scripts/common.sh@365 -- # decimal 1 00:05:48.312 22:11:37 event -- scripts/common.sh@353 -- # local d=1 00:05:48.312 22:11:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.312 22:11:37 event -- scripts/common.sh@355 -- # echo 1 00:05:48.312 22:11:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.312 22:11:37 event -- scripts/common.sh@366 -- # decimal 2 00:05:48.312 22:11:37 event -- scripts/common.sh@353 -- # local d=2 00:05:48.312 22:11:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.312 22:11:37 event -- scripts/common.sh@355 -- # echo 2 00:05:48.312 22:11:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.312 22:11:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.312 22:11:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.312 22:11:37 event -- scripts/common.sh@368 -- # return 0 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.312 --rc genhtml_branch_coverage=1 00:05:48.312 --rc genhtml_function_coverage=1 00:05:48.312 --rc genhtml_legend=1 00:05:48.312 --rc geninfo_all_blocks=1 00:05:48.312 --rc geninfo_unexecuted_blocks=1 00:05:48.312 00:05:48.312 ' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.312 --rc genhtml_branch_coverage=1 00:05:48.312 --rc genhtml_function_coverage=1 00:05:48.312 --rc genhtml_legend=1 00:05:48.312 --rc geninfo_all_blocks=1 00:05:48.312 --rc geninfo_unexecuted_blocks=1 00:05:48.312 00:05:48.312 ' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.312 --rc genhtml_branch_coverage=1 00:05:48.312 --rc genhtml_function_coverage=1 00:05:48.312 --rc genhtml_legend=1 00:05:48.312 --rc geninfo_all_blocks=1 00:05:48.312 --rc geninfo_unexecuted_blocks=1 00:05:48.312 00:05:48.312 ' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.312 --rc genhtml_branch_coverage=1 00:05:48.312 --rc genhtml_function_coverage=1 00:05:48.312 --rc genhtml_legend=1 00:05:48.312 --rc geninfo_all_blocks=1 00:05:48.312 --rc geninfo_unexecuted_blocks=1 00:05:48.312 00:05:48.312 ' 00:05:48.312 22:11:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:48.312 22:11:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.312 22:11:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:48.312 22:11:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.312 22:11:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.312 ************************************ 00:05:48.312 START TEST event_perf 00:05:48.312 ************************************ 00:05:48.312 22:11:37 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:48.312 Running I/O for 1 seconds...[2024-12-16 22:11:37.960605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:48.312 [2024-12-16 22:11:37.960679] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111225 ] 00:05:48.572 [2024-12-16 22:11:38.037421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.572 [2024-12-16 22:11:38.063245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.572 [2024-12-16 22:11:38.063352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.572 [2024-12-16 22:11:38.063465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.572 [2024-12-16 22:11:38.063465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.512 Running I/O for 1 seconds... 00:05:49.512 lcore 0: 201340 00:05:49.512 lcore 1: 201339 00:05:49.512 lcore 2: 201340 00:05:49.512 lcore 3: 201340 00:05:49.512 done. 00:05:49.512 00:05:49.512 real 0m1.154s 00:05:49.512 user 0m4.077s 00:05:49.512 sys 0m0.074s 00:05:49.512 22:11:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.512 22:11:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 ************************************ 00:05:49.512 END TEST event_perf 00:05:49.512 ************************************ 00:05:49.512 22:11:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.512 22:11:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.512 22:11:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.512 22:11:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.512 ************************************ 00:05:49.512 START TEST event_reactor 00:05:49.512 ************************************ 00:05:49.512 22:11:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.512 [2024-12-16 22:11:39.186374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
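The reactor test starting here registers a one-shot event plus periodic pollers and prints one line per invocation; the tick periods visible in its trace below (100, 250, 500) can be mimicked with a toy single-threaded loop. A sketch under those assumptions — none of this is SPDK API, and the real test registers more pollers than shown:

# reactor_sketch.py - toy single-core reactor (illustrative only, not SPDK code)
class Reactor:
    def __init__(self):
        self.oneshots = []          # callbacks run once, then dropped
        self.pollers = []           # (period_in_ticks, callback), run forever

    def run(self, ticks):
        for cb in self.oneshots:    # one-shot events fire before the poll loop
            cb()
        self.oneshots.clear()
        for now in range(1, ticks + 1):
            for period, cb in self.pollers:
                if now % period == 0:
                    cb()

r = Reactor()
r.oneshots.append(lambda: print("oneshot"))
for p in (100, 250, 500):           # periods taken from the trace below
    r.pollers.append((p, lambda p=p: print("tick", p)))
r.run(500)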
00:05:49.513 [2024-12-16 22:11:39.186437] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111476 ] 00:05:49.773 [2024-12-16 22:11:39.264591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.773 [2024-12-16 22:11:39.286974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.714 test_start 00:05:50.714 oneshot 00:05:50.714 tick 100 00:05:50.714 tick 100 00:05:50.714 tick 250 00:05:50.714 tick 100 00:05:50.714 tick 100 00:05:50.714 tick 100 00:05:50.714 tick 250 00:05:50.714 tick 500 00:05:50.714 tick 100 00:05:50.714 tick 100 00:05:50.714 tick 250 00:05:50.714 tick 100 00:05:50.714 tick 100 00:05:50.714 test_end 00:05:50.714 00:05:50.714 real 0m1.156s 00:05:50.714 user 0m1.080s 00:05:50.714 sys 0m0.072s 00:05:50.714 22:11:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.714 22:11:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.714 ************************************ 00:05:50.714 END TEST event_reactor 00:05:50.714 ************************************ 00:05:50.714 22:11:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.714 22:11:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:50.714 22:11:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.714 22:11:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.714 ************************************ 00:05:50.714 START TEST event_reactor_perf 00:05:50.714 ************************************ 00:05:50.714 22:11:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.714 [2024-12-16 22:11:40.409118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
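event_reactor_perf, starting here, simply counts how many no-op events one reactor can allocate, dispatch and retire within a fixed window (about 500k/s on this box, per the "Performance:" line below). The measurement loop reduces to something like this toy, with a queue standing in for the reactor's event ring:

# reactor_perf_sketch.py - what "events per second" measures, in miniature
import time
from collections import deque

q, events = deque(), 0
deadline = time.monotonic() + 1.0   # the test's -t 1 window
while time.monotonic() < deadline:
    q.append(None)                  # stand-in for spdk_event_allocate + send
    q.popleft()                     # reactor pops the event and runs it
    events += 1
print(f"Performance: {events} events per second")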
00:05:50.714 [2024-12-16 22:11:40.409201] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111720 ] 00:05:50.974 [2024-12-16 22:11:40.485093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.974 [2024-12-16 22:11:40.506469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.916 test_start 00:05:51.916 test_end 00:05:51.916 Performance: 500897 events per second 00:05:51.916 00:05:51.916 real 0m1.149s 00:05:51.916 user 0m1.065s 00:05:51.916 sys 0m0.079s 00:05:51.916 22:11:41 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.916 22:11:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.916 ************************************ 00:05:51.916 END TEST event_reactor_perf 00:05:51.916 ************************************ 00:05:51.916 22:11:41 event -- event/event.sh@49 -- # uname -s 00:05:51.916 22:11:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.916 22:11:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.916 22:11:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.916 22:11:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.916 22:11:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.916 ************************************ 00:05:51.916 START TEST event_scheduler 00:05:51.916 ************************************ 00:05:52.176 22:11:41 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:52.176 * Looking for test storage... 
00:05:52.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:52.176 22:11:41 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.176 22:11:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.176 22:11:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.177 22:11:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.177 --rc genhtml_branch_coverage=1 00:05:52.177 --rc genhtml_function_coverage=1 00:05:52.177 --rc genhtml_legend=1 00:05:52.177 --rc geninfo_all_blocks=1 00:05:52.177 --rc geninfo_unexecuted_blocks=1 00:05:52.177 00:05:52.177 ' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.177 --rc genhtml_branch_coverage=1 00:05:52.177 --rc genhtml_function_coverage=1 00:05:52.177 --rc genhtml_legend=1 00:05:52.177 --rc geninfo_all_blocks=1 00:05:52.177 --rc geninfo_unexecuted_blocks=1 00:05:52.177 00:05:52.177 ' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.177 --rc genhtml_branch_coverage=1 00:05:52.177 --rc genhtml_function_coverage=1 00:05:52.177 --rc genhtml_legend=1 00:05:52.177 --rc geninfo_all_blocks=1 00:05:52.177 --rc geninfo_unexecuted_blocks=1 00:05:52.177 00:05:52.177 ' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.177 --rc genhtml_branch_coverage=1 00:05:52.177 --rc genhtml_function_coverage=1 00:05:52.177 --rc genhtml_legend=1 00:05:52.177 --rc geninfo_all_blocks=1 00:05:52.177 --rc geninfo_unexecuted_blocks=1 00:05:52.177 00:05:52.177 ' 00:05:52.177 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:52.177 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=112000 00:05:52.177 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.177 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:52.177 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 112000 
00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 112000 ']' 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.177 22:11:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.177 [2024-12-16 22:11:41.837229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:52.177 [2024-12-16 22:11:41.837275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112000 ] 00:05:52.437 [2024-12-16 22:11:41.909458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.437 [2024-12-16 22:11:41.934699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.438 [2024-12-16 22:11:41.934807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.438 [2024-12-16 22:11:41.934890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.438 [2024-12-16 22:11:41.934890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:52.438 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 [2024-12-16 22:11:41.995552] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:52.438 [2024-12-16 22:11:41.995569] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.438 [2024-12-16 22:11:41.995579] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.438 [2024-12-16 22:11:41.995584] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.438 [2024-12-16 22:11:41.995589] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.438 22:11:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 [2024-12-16 22:11:42.065689] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
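The three knobs logged just above (scheduler load limit 20, core limit 80, core busy 95) are the thresholds the dynamic scheduler consults when placing threads: lightly loaded threads get packed onto the main core, busier ones move to whichever core still has headroom. A deliberately simplified placement sketch — the real logic lives in scheduler_dynamic.c and is considerably more involved:

# dynamic_sched_sketch.py - toy thread placement using the logged thresholds
LOAD_LIMIT, CORE_LIMIT, CORE_BUSY = 20, 80, 95   # values from the NOTICE lines above

def place(thread_load, core_loads, main_core=0):
    if thread_load < LOAD_LIMIT:                 # idle-ish thread: pack on main core
        return main_core
    for core, load in enumerate(core_loads):
        if core == main_core or load >= CORE_BUSY:
            continue                             # skip main core and saturated cores
        if load + thread_load <= CORE_LIMIT:     # first core with enough headroom
            return core
    return main_core                             # nowhere better: stay put

print(place(10, [50, 30, 90, 96]))   # -> 0 (below load limit: packed on main core)
print(place(40, [50, 30, 90, 96]))   # -> 1 (first non-saturated core with headroom)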
00:05:52.438 22:11:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.438 22:11:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.438 22:11:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.438 22:11:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.438 22:11:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 ************************************ 00:05:52.438 START TEST scheduler_create_thread 00:05:52.438 ************************************ 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 2 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 3 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.438 4 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.438 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 5 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 6 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 7 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 8 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 9 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 10 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.698 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.267 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.267 22:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:53.267 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.267 22:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.649 22:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.649 22:11:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:54.649 22:11:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:54.649 22:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.649 22:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.587 22:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.587 00:05:55.587 real 0m3.102s 00:05:55.587 user 0m0.025s 00:05:55.587 sys 0m0.005s 00:05:55.587 22:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.587 22:11:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.587 ************************************ 00:05:55.587 END TEST scheduler_create_thread 00:05:55.587 ************************************ 00:05:55.587 22:11:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:55.587 22:11:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 112000 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 112000 ']' 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 112000 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112000 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112000' 00:05:55.587 killing process with pid 112000 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 112000 00:05:55.587 22:11:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 112000 00:05:56.156 [2024-12-16 22:11:45.580727] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:56.156 00:05:56.156 real 0m4.149s 00:05:56.156 user 0m6.693s 00:05:56.156 sys 0m0.359s 00:05:56.156 22:11:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.156 22:11:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.156 ************************************ 00:05:56.156 END TEST event_scheduler 00:05:56.156 ************************************ 00:05:56.156 22:11:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:56.156 22:11:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:56.156 22:11:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.156 22:11:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.156 22:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.156 ************************************ 00:05:56.156 START TEST app_repeat 00:05:56.156 ************************************ 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112733 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112733' 00:05:56.156 Process app_repeat pid: 112733 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:56.156 spdk_app_start Round 0 00:05:56.156 22:11:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112733 /var/tmp/spdk-nbd.sock 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112733 ']' 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.156 22:11:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.416 [2024-12-16 22:11:45.872356] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:56.416 [2024-12-16 22:11:45.872406] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112733 ] 00:05:56.416 [2024-12-16 22:11:45.946177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.416 [2024-12-16 22:11:45.968770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.416 [2024-12-16 22:11:45.968771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.416 22:11:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.416 22:11:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.416 22:11:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.676 Malloc0 00:05:56.676 22:11:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.935 Malloc1 00:05:56.935 22:11:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.935 22:11:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.194 /dev/nbd0 00:05:57.194 22:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.194 22:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.194 22:11:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.194 22:11:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.195 1+0 records in 00:05:57.195 1+0 records out 00:05:57.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189034 s, 21.7 MB/s 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.195 22:11:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.195 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.195 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.195 22:11:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.454 /dev/nbd1 00:05:57.454 22:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.454 22:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.454 1+0 records in 00:05:57.454 1+0 records out 00:05:57.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236517 s, 17.3 MB/s 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.454 22:11:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.454 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.454 22:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.454 
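The nbd_start_disks sequence traced above exports each malloc bdev as a kernel NBD device over the app's RPC socket, then polls /proc/partitions until the node is usable. A condensed sketch, assuming the app_repeat app is listening on /var/tmp/spdk-nbd.sock (the socket, bdev names, device names, and 20-try limit all come from the trace):

  #!/usr/bin/env bash
  set -euo pipefail

  rpc=(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
  bdevs=(Malloc0 Malloc1)
  nbds=(/dev/nbd0 /dev/nbd1)

  for i in "${!bdevs[@]}"; do
      # Attach the bdev to the NBD device node.
      "${rpc[@]}" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
      # waitfornbd: retry until the kernel lists the device in /proc/partitions,
      # as in autotest_common.sh@875-877 above.
      for ((try = 1; try <= 20; try++)); do
          grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions && break
          sleep 1
      done
  done

The single-block dd read visible in the trace is an extra liveness probe on top of the /proc/partitions check; the sketch keeps only the cheaper half.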
22:11:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.454 22:11:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.454 22:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.714 { 00:05:57.714 "nbd_device": "/dev/nbd0", 00:05:57.714 "bdev_name": "Malloc0" 00:05:57.714 }, 00:05:57.714 { 00:05:57.714 "nbd_device": "/dev/nbd1", 00:05:57.714 "bdev_name": "Malloc1" 00:05:57.714 } 00:05:57.714 ]' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.714 { 00:05:57.714 "nbd_device": "/dev/nbd0", 00:05:57.714 "bdev_name": "Malloc0" 00:05:57.714 }, 00:05:57.714 { 00:05:57.714 "nbd_device": "/dev/nbd1", 00:05:57.714 "bdev_name": "Malloc1" 00:05:57.714 } 00:05:57.714 ]' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.714 /dev/nbd1' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.714 /dev/nbd1' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.714 256+0 records in 00:05:57.714 256+0 records out 00:05:57.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102849 s, 102 MB/s 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.714 256+0 records in 00:05:57.714 256+0 records out 00:05:57.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139476 s, 75.2 MB/s 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.714 22:11:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.715 256+0 records in 00:05:57.715 256+0 records out 00:05:57.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147709 s, 71.0 MB/s 00:05:57.715 22:11:47 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.715 22:11:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.974 22:11:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.277 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.536 22:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.536 22:11:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.536 22:11:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.536 22:11:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.796 [2024-12-16 22:11:48.361462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.796 [2024-12-16 22:11:48.381544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.796 [2024-12-16 22:11:48.381544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.796 [2024-12-16 22:11:48.421971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.796 [2024-12-16 22:11:48.422009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.087 22:11:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.087 22:11:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.087 spdk_app_start Round 1 00:06:02.087 22:11:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112733 /var/tmp/spdk-nbd.sock 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112733 ']' 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
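Round 0 above also shows the data path exercised end to end: nbd_dd_data_verify writes one megabyte of random data through each NBD device with O_DIRECT, then byte-compares the devices against the source file. A sketch of that round, with block size, count, and cmp flags taken from the trace; the temp-file location is an assumption (the test itself uses spdk/test/event/nbdrandtest):

  #!/usr/bin/env bash
  set -euo pipefail

  nbds=(/dev/nbd0 /dev/nbd1)
  tmp=$(mktemp)

  # Source buffer: 256 x 4096 B = 1 MiB of random data.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256

  # Write phase: push the same data through every exported device.
  for nbd in "${nbds[@]}"; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done

  # Verify phase: cmp exits non-zero on the first differing byte.
  for nbd in "${nbds[@]}"; do
      cmp -b -n 1M "$tmp" "$nbd"
  done

  rm -f "$tmp"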
00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.087 22:11:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.087 22:11:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.087 Malloc0 00:06:02.087 22:11:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.347 Malloc1 00:06:02.347 22:11:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.347 22:11:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.606 /dev/nbd0 00:06:02.606 22:11:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.606 22:11:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:02.606 1+0 records in 00:06:02.606 1+0 records out 00:06:02.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190416 s, 21.5 MB/s 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.606 22:11:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.607 22:11:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.607 22:11:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.607 22:11:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.607 /dev/nbd1 00:06:02.607 22:11:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.607 22:11:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.607 22:11:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.607 1+0 records in 00:06:02.607 1+0 records out 00:06:02.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260035 s, 15.8 MB/s 00:06:02.866 22:11:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.866 22:11:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:02.866 22:11:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.866 22:11:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.866 22:11:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:02.866 { 00:06:02.866 "nbd_device": "/dev/nbd0", 00:06:02.866 "bdev_name": "Malloc0" 00:06:02.866 }, 00:06:02.866 { 00:06:02.866 "nbd_device": "/dev/nbd1", 00:06:02.866 "bdev_name": "Malloc1" 00:06:02.866 } 00:06:02.866 ]' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.866 { 00:06:02.866 "nbd_device": "/dev/nbd0", 00:06:02.866 "bdev_name": "Malloc0" 00:06:02.866 }, 00:06:02.866 { 00:06:02.866 "nbd_device": "/dev/nbd1", 00:06:02.866 "bdev_name": "Malloc1" 00:06:02.866 } 00:06:02.866 ]' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.866 /dev/nbd1' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.866 /dev/nbd1' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.866 22:11:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.126 256+0 records in 00:06:03.126 256+0 records out 00:06:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106327 s, 98.6 MB/s 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.126 256+0 records in 00:06:03.126 256+0 records out 00:06:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139821 s, 75.0 MB/s 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.126 256+0 records in 00:06:03.126 256+0 records out 00:06:03.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144323 s, 72.7 MB/s 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.126 22:11:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.384 22:11:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.384 22:11:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.384 22:11:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.384 22:11:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.384 22:11:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.384 22:11:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.385 22:11:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.643 22:11:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.643 22:11:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.903 22:11:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.162 [2024-12-16 22:11:53.658904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.162 [2024-12-16 22:11:53.678958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.162 [2024-12-16 22:11:53.678958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.162 [2024-12-16 22:11:53.720381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.162 [2024-12-16 22:11:53.720421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.453 22:11:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.453 22:11:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.453 spdk_app_start Round 2 00:06:07.453 22:11:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112733 /var/tmp/spdk-nbd.sock 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112733 ']' 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
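Rounds 1 and 2 repeat the same create/start/write/verify cycle, so only the teardown half is worth isolating: the nbd_stop_disks and waitfornbd_exit traces that recur above are the inverse of the start sequence, stopping each disk over RPC and then polling until the device leaves /proc/partitions. A sketch under the same socket assumption as the earlier one:

  #!/usr/bin/env bash
  set -euo pipefail

  rpc=(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock)

  for nbd in /dev/nbd0 /dev/nbd1; do
      "${rpc[@]}" nbd_stop_disk "$nbd"
      # waitfornbd_exit: the break in the trace fires once the entry is gone.
      for ((try = 1; try <= 20; try++)); do
          grep -q -w "$(basename "$nbd")" /proc/partitions || break
          sleep 1
      done
  done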
00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.453 22:11:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:07.453 22:11:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.453 Malloc0 00:06:07.453 22:11:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.453 Malloc1 00:06:07.453 22:11:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.453 22:11:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.713 /dev/nbd0 00:06:07.713 22:11:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.713 22:11:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:07.713 1+0 records in 00:06:07.713 1+0 records out 00:06:07.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189122 s, 21.7 MB/s 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.713 22:11:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.713 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.713 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.713 22:11:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.972 /dev/nbd1 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.972 1+0 records in 00:06:07.972 1+0 records out 00:06:07.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199198 s, 20.6 MB/s 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.972 22:11:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.972 22:11:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.973 22:11:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:08.232 { 00:06:08.232 "nbd_device": "/dev/nbd0", 00:06:08.232 "bdev_name": "Malloc0" 00:06:08.232 }, 00:06:08.232 { 00:06:08.232 "nbd_device": "/dev/nbd1", 00:06:08.232 "bdev_name": "Malloc1" 00:06:08.232 } 00:06:08.232 ]' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.232 { 00:06:08.232 "nbd_device": "/dev/nbd0", 00:06:08.232 "bdev_name": "Malloc0" 00:06:08.232 }, 00:06:08.232 { 00:06:08.232 "nbd_device": "/dev/nbd1", 00:06:08.232 "bdev_name": "Malloc1" 00:06:08.232 } 00:06:08.232 ]' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.232 /dev/nbd1' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.232 /dev/nbd1' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.232 256+0 records in 00:06:08.232 256+0 records out 00:06:08.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106987 s, 98.0 MB/s 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.232 256+0 records in 00:06:08.232 256+0 records out 00:06:08.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144388 s, 72.6 MB/s 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.232 22:11:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.492 256+0 records in 00:06:08.492 256+0 records out 00:06:08.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149187 s, 70.3 MB/s 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.492 22:11:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.492 22:11:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.751 22:11:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.011 22:11:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.011 22:11:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.271 22:11:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.530 [2024-12-16 22:11:58.991739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.530 [2024-12-16 22:11:59.013317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.530 [2024-12-16 22:11:59.013318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.530 [2024-12-16 22:11:59.053849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.530 [2024-12-16 22:11:59.053887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.839 22:12:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112733 /var/tmp/spdk-nbd.sock 00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112733 ']' 00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.839 22:12:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:12.839 22:12:02 event.app_repeat -- event/event.sh@39 -- # killprocess 112733 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112733 ']' 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112733 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112733 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112733' 00:06:12.839 killing process with pid 112733 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112733 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112733 00:06:12.839 spdk_app_start is called in Round 0. 00:06:12.839 Shutdown signal received, stop current app iteration 00:06:12.839 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:12.839 spdk_app_start is called in Round 1. 00:06:12.839 Shutdown signal received, stop current app iteration 00:06:12.839 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:12.839 spdk_app_start is called in Round 2. 00:06:12.839 Shutdown signal received, stop current app iteration 00:06:12.839 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:12.839 spdk_app_start is called in Round 3. 
00:06:12.839 Shutdown signal received, stop current app iteration 00:06:12.839 22:12:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.839 22:12:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.839 00:06:12.839 real 0m16.406s 00:06:12.839 user 0m36.205s 00:06:12.839 sys 0m2.531s 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.839 22:12:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.839 ************************************ 00:06:12.839 END TEST app_repeat 00:06:12.839 ************************************ 00:06:12.839 22:12:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.839 22:12:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.839 22:12:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.839 22:12:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.839 22:12:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.839 ************************************ 00:06:12.839 START TEST cpu_locks 00:06:12.839 ************************************ 00:06:12.839 22:12:02 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.839 * Looking for test storage... 00:06:12.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:12.839 22:12:02 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.839 22:12:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.839 22:12:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.839 22:12:02 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.839 22:12:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:12.840 22:12:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.840 --rc genhtml_branch_coverage=1
00:06:12.840 --rc genhtml_function_coverage=1
00:06:12.840 --rc genhtml_legend=1
00:06:12.840 --rc geninfo_all_blocks=1
00:06:12.840 --rc geninfo_unexecuted_blocks=1
00:06:12.840
00:06:12.840 '
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.840 --rc genhtml_branch_coverage=1
00:06:12.840 --rc genhtml_function_coverage=1
00:06:12.840 --rc genhtml_legend=1
00:06:12.840 --rc geninfo_all_blocks=1
00:06:12.840 --rc geninfo_unexecuted_blocks=1
00:06:12.840
00:06:12.840 '
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:06:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.840 --rc genhtml_branch_coverage=1
00:06:12.840 --rc genhtml_function_coverage=1
00:06:12.840 --rc genhtml_legend=1
00:06:12.840 --rc geninfo_all_blocks=1
00:06:12.840 --rc geninfo_unexecuted_blocks=1
00:06:12.840
00:06:12.840 '
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:06:12.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.840 --rc genhtml_branch_coverage=1
00:06:12.840 --rc genhtml_function_coverage=1
00:06:12.840 --rc genhtml_legend=1
00:06:12.840 --rc geninfo_all_blocks=1
00:06:12.840 --rc geninfo_unexecuted_blocks=1
00:06:12.840
00:06:12.840 '
00:06:12.840 22:12:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:12.840 22:12:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:12.840 22:12:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:12.840 22:12:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.840 22:12:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
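The xtrace above is the harness's lcov version gate: both version strings are split on '.', '-' and ':' and compared component-wise. A condensed sketch of that comparison, assuming a plain strict less-than check is all that is needed (the helper name lt mirrors the trace, but this is not the scripts/common.sh source verbatim):

    # Split e.g. "1.15" and "2" into components and compare left to right.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller: less-than holds
        done
        return 1                                              # equal: not strictly less
    }
    lt 1.15 2 && echo "lcov predates 2.x"                     # prints the message

Missing components default to 0, so a short version such as "2" compares cleanly against a longer one such as "1.15".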
00:06:12.840 ************************************
00:06:12.840 START TEST default_locks
************************************
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=115792
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 115792
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115792 ']'
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
[2024-12-16 22:12:02.566020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-16 22:12:02.566062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115792 ]
[2024-12-16 22:12:02.639562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:02.661806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
22:12:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 115792
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 115792
22:12:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:13.618 lslocks: write error
00:06:13.618 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 115792 ']'
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115792'
00:06:13.618 killing process with pid 115792
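The locks_exist check traced above is a one-liner over util-linux: list the POSIX locks held by the target and look for the per-core lock file name. The "lslocks: write error" in the log is expected noise, not a failure: grep -q exits on its first match and closes the pipe, so lslocks fails its next write. A minimal sketch of the same check:

    locks_exist() {
        # A freshly started target should hold a lock on /var/tmp/spdk_cpu_lock_*.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist "$spdk_tgt_pid" && echo "core lock held by $spdk_tgt_pid"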
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 115792
00:06:14.188 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 115792
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 115792 ']'
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
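waitforlisten is expected to fail here: pid 115792 was just killed, and the NOT wrapper inverts the exit status so that the expected failure counts as a pass. A stripped-down sketch of that inversion (the real helper also records the status in es, as the following entries show):

    NOT() {
        # Succeed only if the wrapped command fails.
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # expected failure
    }
    NOT kill -0 "$dead_pid" && echo "pid $dead_pid is gone, as expected"   # dead_pid is a placeholder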
00:06:14.188 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.188 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (115792) - No such process
00:06:14.188 ERROR: process (pid: 115792) is no longer running
00:06:14.188 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.188 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:14.189
00:06:14.189 real 0m1.094s
00:06:14.189 user 0m1.054s
00:06:14.189 sys 0m0.533s
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.189 22:12:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.189 ************************************
00:06:14.189 END TEST default_locks
00:06:14.189 ************************************
00:06:14.189 22:12:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:14.189 22:12:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.189 22:12:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.189 22:12:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:14.189 ************************************
00:06:14.189 START TEST default_locks_via_rpc
00:06:14.189 ************************************
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116043
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 116043
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 116043 ']'
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
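The "kill: (115792) - No such process" above comes from the cleanup path probing a pid that had already exited. The killprocess helper's shape, as traced repeatedly in this log (a paraphrase, not the autotest_common.sh source):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                           # signal 0 only probes liveness
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        fi
        [ "$process_name" = sudo ] && return 1               # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # reap; works because the target is a child
    }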
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:14.189 22:12:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.189 [2024-12-16 22:12:03.731086] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-16 22:12:03.731126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116043 ]
[2024-12-16 22:12:03.803790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:03.826457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 116043
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 116043
00:06:14.449 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 116043
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 116043 ']'
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 116043
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.708 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116043
00:06:14.968 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
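Just above, the test toggled the core locks at runtime over the RPC socket before tearing the target down. The sequence, paraphrased with the method names exactly as traced (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py):

    rpc_cmd framework_disable_cpumask_locks   # target releases its per-core lock file
    no_locks                                  # harness check: no /var/tmp/spdk_cpu_lock_* left
    rpc_cmd framework_enable_cpumask_locks    # target re-claims core 0
    locks_exist "$spdk_tgt_pid"               # lslocks sees the lock again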
00:06:14.968 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.968 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116043'
killing process with pid 116043
00:06:14.968 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 116043
00:06:14.968 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 116043
00:06:15.228
00:06:15.228 real 0m1.023s
00:06:15.228 user 0m0.973s
00:06:15.228 sys 0m0.486s
00:06:15.229 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.229 22:12:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:15.229 ************************************
00:06:15.229 END TEST default_locks_via_rpc
00:06:15.229 ************************************
00:06:15.229 22:12:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:15.229 22:12:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.229 22:12:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.229 22:12:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.229 ************************************
00:06:15.229 START TEST non_locking_app_on_locked_coremask
00:06:15.229 ************************************
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116291
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 116291 /var/tmp/spdk.sock
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116291 ']'
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.229 22:12:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.229 [2024-12-16 22:12:04.812443] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:15.229 [2024-12-16 22:12:04.812483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116291 ]
[2024-12-16 22:12:04.881954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:04.904780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116300
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 116300 /var/tmp/spdk2.sock
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116300 ']'
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.498 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.760 [2024-12-16 22:12:05.146171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
[2024-12-16 22:12:05.146221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116300 ]
[2024-12-16 22:12:05.229995] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
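At this point two targets are alive on the same core: 116291 started plainly on -m 0x1 and owns the core 0 lock, while 116300 reuses the mask but opts out of claiming with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) and listens on a second RPC socket. The launch pattern, condensed from the trace (the waitforlisten helper is the harness's, assumed available):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x1 &                                                 # claims core 0
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no claim
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock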
00:06:15.760 [2024-12-16 22:12:05.230017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:05.276093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.329 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.329 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:16.329 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 116291
00:06:16.329 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116291
00:06:16.329 22:12:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:16.899 lslocks: write error
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 116291
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116291 ']'
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116291
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116291
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116291'
killing process with pid 116291
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116291
00:06:16.899 22:12:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116291
00:06:17.470 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 116300
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116300 ']'
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116300
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116300
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:17.730 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116300'
killing process with pid 116300
22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116300
22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116300
00:06:17.989
00:06:17.989 real 0m2.750s
00:06:17.989 user 0m2.919s
00:06:17.989 sys 0m0.920s
00:06:17.989 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:17.989 22:12:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.989 ************************************
00:06:17.989 END TEST non_locking_app_on_locked_coremask
00:06:17.989 ************************************
00:06:17.989 22:12:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:17.989 22:12:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:17.989 22:12:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:17.989 22:12:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:17.989 ************************************
00:06:17.989 START TEST locking_app_on_unlocked_coremask
00:06:17.989 ************************************
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=117160
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 117160 /var/tmp/spdk.sock
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117160 ']'
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:17.989 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.990 [2024-12-16 22:12:07.632586] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:18.249 [2024-12-16 22:12:07.632625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117160 ]
[2024-12-16 22:12:07.706278] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:18.249 [2024-12-16 22:12:07.706304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:07.729237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117179
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 117179 /var/tmp/spdk2.sock
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117179 ']'
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:18.249 22:12:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.509 [2024-12-16 22:12:07.973253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:18.509 [2024-12-16 22:12:07.973299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117179 ]
[2024-12-16 22:12:08.059913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:08.105961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.769 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.769 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:18.769 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 117179
00:06:18.769 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117179
00:06:18.769 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:19.338 lslocks: write error
00:06:19.338 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 117160
00:06:19.338 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117160 ']'
00:06:19.338 22:12:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 117160
00:06:19.338 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:19.338 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:19.338 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117160
00:06:19.598 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:19.598 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:19.598 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117160'
killing process with pid 117160
00:06:19.598 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 117160
00:06:19.598 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 117160
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 117179
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117179 ']'
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 117179
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117179
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117179'
killing process with pid 117179
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 117179
00:06:20.168 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 117179
00:06:20.429
00:06:20.429 real 0m2.391s
00:06:20.429 user 0m2.405s
00:06:20.429 sys 0m0.926s
00:06:20.429 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.429 22:12:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.429 ************************************
00:06:20.429 END TEST locking_app_on_unlocked_coremask
00:06:20.429 ************************************
00:06:20.429 22:12:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:20.429 22:12:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:20.429 22:12:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.429 22:12:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:20.429 ************************************
00:06:20.429 START TEST locking_app_on_locked_coremask
00:06:20.429 ************************************
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117653
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 117653 /var/tmp/spdk.sock
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117653 ']'
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:20.429 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.429 [2024-12-16 22:12:10.093691] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:20.429 [2024-12-16 22:12:10.093733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117653 ]
00:06:20.689 [2024-12-16 22:12:10.168336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-16 22:12:10.190451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117660
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117660 /var/tmp/spdk2.sock
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117660 /var/tmp/spdk2.sock
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117660 /var/tmp/spdk2.sock
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117660 ']'
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:20.949 22:12:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.949 [2024-12-16 22:12:10.453800] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:20.949 [2024-12-16 22:12:10.453848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117660 ]
[2024-12-16 22:12:10.544776] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117653 has claimed it.
[2024-12-16 22:12:10.544821] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:21.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117660) - No such process
ERROR: process (pid: 117660) is no longer running
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 117653
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117653
22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:22.088 lslocks: write error
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 117653
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117653 ']'
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117653
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117653
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117653'
killing process with pid 117653
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117653
00:06:22.088 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117653
00:06:22.348
00:06:22.348 real 0m1.810s
00:06:22.348 user 0m1.957s
00:06:22.348 sys 0m0.631s
00:06:22.348 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
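The claim_cpu_cores error above is the whole point of this test: the second instance asked for core 0 with locking enabled and was refused because 117653 already holds /var/tmp/spdk_cpu_lock_000. The observable semantics are those of a non-blocking per-core lock file; an illustration with util-linux flock(1), which is not SPDK's internal code path (app.c takes the locks itself) but behaves the same way from the outside:

    flock -n /var/tmp/spdk_cpu_lock_000 sleep 5 &   # first claimer wins the lock file
    sleep 0.2
    flock -n /var/tmp/spdk_cpu_lock_000 true ||
        echo "cannot create lock on core 0: already claimed"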
00:06:22.348 22:12:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:22.348 ************************************
00:06:22.348 END TEST locking_app_on_locked_coremask
00:06:22.348 ************************************
00:06:22.348 22:12:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:22.348 22:12:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:22.348 22:12:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:22.348 22:12:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:22.348 ************************************
00:06:22.348 START TEST locking_overlapped_coremask
00:06:22.348 ************************************
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117916
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117916 /var/tmp/spdk.sock
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117916 ']'
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.348 22:12:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:22.348 [2024-12-16 22:12:11.975410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:22.348 [2024-12-16 22:12:11.975454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117916 ]
[2024-12-16 22:12:12.032703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:22.608 [2024-12-16 22:12:12.058684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-16 22:12:12.058718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-16 22:12:12.058717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=118065
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 118065 /var/tmp/spdk2.sock
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 118065 /var/tmp/spdk2.sock
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 118065 /var/tmp/spdk2.sock
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 118065 ']'
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:22.608 22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:22.868 [2024-12-16 22:12:12.316787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:22.868 [2024-12-16 22:12:12.316838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118065 ]
[2024-12-16 22:12:12.410144] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117916 has claimed it.
[2024-12-16 22:12:12.410181] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:23.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (118065) - No such process
ERROR: process (pid: 118065) is no longer running
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
22:12:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117916
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117916 ']'
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117916
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
22:12:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117916
00:06:23.438 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:23.438 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:23.438 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117916'
killing process with pid 117916
22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117916
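check_remaining_locks, traced just above, verifies that exactly the three claimed cores left lock files behind: glob whatever exists and compare it against the brace-expanded expected set. A condensed copy of the traced commands:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]   # exactly cores 0-2 are locked
    }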
00:06:23.438 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117916
00:06:23.698
00:06:23.698 real 0m1.387s
00:06:23.698 user 0m3.921s
00:06:23.698 sys 0m0.370s
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:23.698 ************************************
00:06:23.698 END TEST locking_overlapped_coremask
00:06:23.698 ************************************
00:06:23.698 22:12:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:23.698 22:12:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:23.698 22:12:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.698 22:12:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:23.698 ************************************
00:06:23.698 START TEST locking_overlapped_coremask_via_rpc
00:06:23.698 ************************************
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=118181
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 118181 /var/tmp/spdk.sock
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118181 ']'
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:23.698 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:23.958 [2024-12-16 22:12:13.430103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:23.958 [2024-12-16 22:12:13.430144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118181 ]
[2024-12-16 22:12:13.502712] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:23.958 [2024-12-16 22:12:13.502740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.958 [2024-12-16 22:12:13.527787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.958 [2024-12-16 22:12:13.527894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.958 [2024-12-16 22:12:13.527894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=118387 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 118387 /var/tmp/spdk2.sock 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118387 ']' 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.218 22:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.218 [2024-12-16 22:12:13.773757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:24.218 [2024-12-16 22:12:13.773802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118387 ] 00:06:24.218 [2024-12-16 22:12:13.863178] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.218 [2024-12-16 22:12:13.863206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.218 [2024-12-16 22:12:13.911953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.218 [2024-12-16 22:12:13.912068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.218 [2024-12-16 22:12:13.912070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.156 [2024-12-16 22:12:14.622262] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118181 has claimed it. 
00:06:25.156 request: 00:06:25.156 { 00:06:25.156 "method": "framework_enable_cpumask_locks", 00:06:25.156 "req_id": 1 00:06:25.156 } 00:06:25.156 Got JSON-RPC error response 00:06:25.156 response: 00:06:25.156 { 00:06:25.156 "code": -32603, 00:06:25.156 "message": "Failed to claim CPU core: 2" 00:06:25.156 } 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 118181 /var/tmp/spdk.sock 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118181 ']' 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 118387 /var/tmp/spdk2.sock 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118387 ']' 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.156 22:12:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.415 00:06:25.415 real 0m1.688s 00:06:25.415 user 0m0.843s 00:06:25.415 sys 0m0.129s 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.415 22:12:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.415 ************************************ 00:06:25.415 END TEST locking_overlapped_coremask_via_rpc 00:06:25.415 ************************************ 00:06:25.415 22:12:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.415 22:12:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118181 ]] 00:06:25.415 22:12:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 118181 00:06:25.415 22:12:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118181 ']' 00:06:25.415 22:12:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118181 00:06:25.415 22:12:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.415 22:12:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.415 22:12:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118181 00:06:25.674 22:12:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.674 22:12:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.674 22:12:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118181' 00:06:25.674 killing process with pid 118181 00:06:25.674 22:12:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118181 00:06:25.674 22:12:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118181 00:06:25.933 22:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118387 ]] 00:06:25.933 22:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118387 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118387 ']' 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118387 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118387 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118387' 00:06:25.933 killing process with pid 118387 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118387 00:06:25.933 22:12:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118387 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118181 ]] 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 118181 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118181 ']' 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118181 00:06:26.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118181) - No such process 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118181 is not found' 00:06:26.193 Process with pid 118181 is not found 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118387 ]] 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118387 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118387 ']' 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118387 00:06:26.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118387) - No such process 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118387 is not found' 00:06:26.193 Process with pid 118387 is not found 00:06:26.193 22:12:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.193 00:06:26.193 real 0m13.500s 00:06:26.193 user 0m23.837s 00:06:26.193 sys 0m4.940s 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.193 22:12:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 ************************************ 00:06:26.193 END TEST cpu_locks 00:06:26.193 ************************************ 00:06:26.193 00:06:26.193 real 0m38.116s 00:06:26.193 user 1m13.227s 00:06:26.193 sys 0m8.424s 00:06:26.193 22:12:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.193 22:12:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 ************************************ 00:06:26.193 END TEST event 00:06:26.193 ************************************ 00:06:26.193 22:12:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.193 22:12:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.193 22:12:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.193 22:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.453 ************************************ 00:06:26.453 START TEST thread 00:06:26.453 ************************************ 00:06:26.453 22:12:15 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.453 * Looking for test storage... 00:06:26.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.453 22:12:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.453 22:12:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.453 22:12:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.453 22:12:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.453 22:12:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.453 22:12:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.453 22:12:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.453 22:12:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.453 22:12:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.453 22:12:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.453 22:12:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.453 22:12:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.453 22:12:16 thread -- scripts/common.sh@345 -- # : 1 00:06:26.453 22:12:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.453 22:12:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.453 22:12:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.453 22:12:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.453 22:12:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.453 22:12:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.453 22:12:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.453 22:12:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.453 22:12:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.453 22:12:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.453 22:12:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.453 22:12:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.453 22:12:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.453 22:12:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.453 22:12:16 thread -- scripts/common.sh@368 -- # return 0 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.453 --rc genhtml_branch_coverage=1 00:06:26.453 --rc genhtml_function_coverage=1 00:06:26.453 --rc genhtml_legend=1 00:06:26.453 --rc geninfo_all_blocks=1 00:06:26.453 --rc geninfo_unexecuted_blocks=1 00:06:26.453 00:06:26.453 ' 00:06:26.453 22:12:16 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.453 --rc genhtml_branch_coverage=1 00:06:26.453 --rc genhtml_function_coverage=1 00:06:26.453 --rc genhtml_legend=1 00:06:26.453 --rc geninfo_all_blocks=1 00:06:26.453 --rc geninfo_unexecuted_blocks=1 00:06:26.453 00:06:26.453 ' 00:06:26.453 22:12:16 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.453 --rc genhtml_branch_coverage=1 00:06:26.453 --rc genhtml_function_coverage=1 00:06:26.453 --rc genhtml_legend=1 00:06:26.454 --rc geninfo_all_blocks=1 00:06:26.454 --rc geninfo_unexecuted_blocks=1 00:06:26.454 00:06:26.454 ' 00:06:26.454 22:12:16 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.454 --rc genhtml_branch_coverage=1 00:06:26.454 --rc genhtml_function_coverage=1 00:06:26.454 --rc genhtml_legend=1 00:06:26.454 --rc geninfo_all_blocks=1 00:06:26.454 --rc geninfo_unexecuted_blocks=1 00:06:26.454 00:06:26.454 ' 00:06:26.454 22:12:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.454 22:12:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.454 22:12:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.454 22:12:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 ************************************ 00:06:26.454 START TEST thread_poller_perf 00:06:26.454 ************************************ 00:06:26.454 22:12:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.454 [2024-12-16 22:12:16.141259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:26.454 [2024-12-16 22:12:16.141327] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118745 ] 00:06:26.714 [2024-12-16 22:12:16.220163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.714 [2024-12-16 22:12:16.242267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.714 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:27.653 [2024-12-16T21:12:17.354Z] ====================================== 00:06:27.653 [2024-12-16T21:12:17.354Z] busy:2108536730 (cyc) 00:06:27.653 [2024-12-16T21:12:17.354Z] total_run_count: 426000 00:06:27.653 [2024-12-16T21:12:17.354Z] tsc_hz: 2100000000 (cyc) 00:06:27.653 [2024-12-16T21:12:17.354Z] ====================================== 00:06:27.653 [2024-12-16T21:12:17.354Z] poller_cost: 4949 (cyc), 2356 (nsec) 00:06:27.653 00:06:27.653 real 0m1.159s 00:06:27.653 user 0m1.077s 00:06:27.653 sys 0m0.077s 00:06:27.653 22:12:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.653 22:12:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.653 ************************************ 00:06:27.653 END TEST thread_poller_perf 00:06:27.653 ************************************ 00:06:27.653 22:12:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.653 22:12:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:27.653 22:12:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.653 22:12:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.653 ************************************ 00:06:27.653 START TEST thread_poller_perf 00:06:27.653 ************************************ 00:06:27.653 22:12:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.913 [2024-12-16 22:12:17.373989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:27.913 [2024-12-16 22:12:17.374067] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118986 ] 00:06:27.913 [2024-12-16 22:12:17.453019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.913 [2024-12-16 22:12:17.475120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.913 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:28.859 [2024-12-16T21:12:18.560Z] ====================================== 00:06:28.859 [2024-12-16T21:12:18.560Z] busy:2101295410 (cyc) 00:06:28.859 [2024-12-16T21:12:18.560Z] total_run_count: 5164000 00:06:28.859 [2024-12-16T21:12:18.560Z] tsc_hz: 2100000000 (cyc) 00:06:28.859 [2024-12-16T21:12:18.560Z] ====================================== 00:06:28.859 [2024-12-16T21:12:18.560Z] poller_cost: 406 (cyc), 193 (nsec) 00:06:28.859 00:06:28.859 real 0m1.155s 00:06:28.859 user 0m1.073s 00:06:28.859 sys 0m0.077s 00:06:28.859 22:12:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.859 22:12:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.859 ************************************ 00:06:28.859 END TEST thread_poller_perf 00:06:28.859 ************************************ 00:06:28.859 22:12:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:28.859 00:06:28.859 real 0m2.628s 00:06:28.859 user 0m2.316s 00:06:28.859 sys 0m0.326s 00:06:28.859 22:12:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.859 22:12:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.859 ************************************ 00:06:28.859 END TEST thread 00:06:28.859 ************************************ 00:06:29.120 22:12:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.120 22:12:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:29.120 22:12:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.120 22:12:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.120 22:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:29.120 ************************************ 00:06:29.120 START TEST app_cmdline 00:06:29.120 ************************************ 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:29.120 * Looking for test storage... 
00:06:29.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.120 22:12:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.120 --rc genhtml_branch_coverage=1 00:06:29.120 --rc genhtml_function_coverage=1 00:06:29.120 --rc genhtml_legend=1 00:06:29.120 --rc geninfo_all_blocks=1 00:06:29.120 --rc geninfo_unexecuted_blocks=1 00:06:29.120 00:06:29.120 ' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.120 --rc genhtml_branch_coverage=1 00:06:29.120 --rc genhtml_function_coverage=1 00:06:29.120 --rc genhtml_legend=1 00:06:29.120 --rc geninfo_all_blocks=1 00:06:29.120 --rc geninfo_unexecuted_blocks=1 
00:06:29.120 00:06:29.120 ' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.120 --rc genhtml_branch_coverage=1 00:06:29.120 --rc genhtml_function_coverage=1 00:06:29.120 --rc genhtml_legend=1 00:06:29.120 --rc geninfo_all_blocks=1 00:06:29.120 --rc geninfo_unexecuted_blocks=1 00:06:29.120 00:06:29.120 ' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.120 --rc genhtml_branch_coverage=1 00:06:29.120 --rc genhtml_function_coverage=1 00:06:29.120 --rc genhtml_legend=1 00:06:29.120 --rc geninfo_all_blocks=1 00:06:29.120 --rc geninfo_unexecuted_blocks=1 00:06:29.120 00:06:29.120 ' 00:06:29.120 22:12:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.120 22:12:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=119274 00:06:29.120 22:12:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 119274 00:06:29.120 22:12:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 119274 ']' 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.120 22:12:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.380 [2024-12-16 22:12:18.833801] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:29.380 [2024-12-16 22:12:18.833846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119274 ] 00:06:29.380 [2024-12-16 22:12:18.909224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.380 [2024-12-16 22:12:18.932316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.640 22:12:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.640 22:12:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:29.640 { 00:06:29.640 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:29.640 "fields": { 00:06:29.640 "major": 25, 00:06:29.640 "minor": 1, 00:06:29.640 "patch": 0, 00:06:29.640 "suffix": "-pre", 00:06:29.640 "commit": "e01cb43b8" 00:06:29.640 } 00:06:29.640 } 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:29.640 22:12:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:29.640 22:12:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.640 22:12:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.640 22:12:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.900 22:12:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:29.900 22:12:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:29.900 22:12:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.900 request: 00:06:29.900 { 00:06:29.900 "method": "env_dpdk_get_mem_stats", 00:06:29.900 "req_id": 1 00:06:29.900 } 00:06:29.900 Got JSON-RPC error response 00:06:29.900 response: 00:06:29.900 { 00:06:29.900 "code": -32601, 00:06:29.900 "message": "Method not found" 00:06:29.900 } 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.900 22:12:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 119274 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 119274 ']' 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 119274 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.900 22:12:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119274 00:06:30.160 22:12:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.160 22:12:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.160 22:12:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119274' 00:06:30.160 killing process with pid 119274 00:06:30.160 22:12:19 app_cmdline -- common/autotest_common.sh@973 -- # kill 119274 00:06:30.160 22:12:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 119274 00:06:30.420 00:06:30.420 real 0m1.296s 00:06:30.420 user 0m1.519s 00:06:30.420 sys 0m0.441s 00:06:30.420 22:12:19 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.420 22:12:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.420 ************************************ 00:06:30.420 END TEST app_cmdline 00:06:30.420 ************************************ 00:06:30.420 22:12:19 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:30.420 22:12:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.420 22:12:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.420 22:12:19 -- common/autotest_common.sh@10 -- # set +x 00:06:30.420 ************************************ 00:06:30.420 START TEST version 00:06:30.420 ************************************ 00:06:30.420 22:12:19 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:30.420 * Looking for test storage... 
00:06:30.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:30.420 22:12:20 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.420 22:12:20 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.420 22:12:20 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.680 22:12:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.680 22:12:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.680 22:12:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.680 22:12:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.680 22:12:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.680 22:12:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.680 22:12:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.680 22:12:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.680 22:12:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.680 22:12:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.680 22:12:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.680 22:12:20 version -- scripts/common.sh@344 -- # case "$op" in 00:06:30.680 22:12:20 version -- scripts/common.sh@345 -- # : 1 00:06:30.680 22:12:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.680 22:12:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.680 22:12:20 version -- scripts/common.sh@365 -- # decimal 1 00:06:30.680 22:12:20 version -- scripts/common.sh@353 -- # local d=1 00:06:30.680 22:12:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.680 22:12:20 version -- scripts/common.sh@355 -- # echo 1 00:06:30.680 22:12:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.680 22:12:20 version -- scripts/common.sh@366 -- # decimal 2 00:06:30.680 22:12:20 version -- scripts/common.sh@353 -- # local d=2 00:06:30.680 22:12:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.680 22:12:20 version -- scripts/common.sh@355 -- # echo 2 00:06:30.680 22:12:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.680 22:12:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.680 22:12:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.680 22:12:20 version -- scripts/common.sh@368 -- # return 0 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.680 --rc genhtml_branch_coverage=1 00:06:30.680 --rc genhtml_function_coverage=1 00:06:30.680 --rc genhtml_legend=1 00:06:30.680 --rc geninfo_all_blocks=1 00:06:30.680 --rc geninfo_unexecuted_blocks=1 00:06:30.680 00:06:30.680 ' 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.680 --rc genhtml_branch_coverage=1 00:06:30.680 --rc genhtml_function_coverage=1 00:06:30.680 --rc genhtml_legend=1 00:06:30.680 --rc geninfo_all_blocks=1 00:06:30.680 --rc geninfo_unexecuted_blocks=1 00:06:30.680 00:06:30.680 ' 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.680 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.680 --rc genhtml_branch_coverage=1 00:06:30.680 --rc genhtml_function_coverage=1 00:06:30.680 --rc genhtml_legend=1 00:06:30.680 --rc geninfo_all_blocks=1 00:06:30.680 --rc geninfo_unexecuted_blocks=1 00:06:30.680 00:06:30.680 ' 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.680 --rc genhtml_branch_coverage=1 00:06:30.680 --rc genhtml_function_coverage=1 00:06:30.680 --rc genhtml_legend=1 00:06:30.680 --rc geninfo_all_blocks=1 00:06:30.680 --rc geninfo_unexecuted_blocks=1 00:06:30.680 00:06:30.680 ' 00:06:30.680 22:12:20 version -- app/version.sh@17 -- # get_header_version major 00:06:30.680 22:12:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.680 22:12:20 version -- app/version.sh@17 -- # major=25 00:06:30.680 22:12:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.680 22:12:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.680 22:12:20 version -- app/version.sh@18 -- # minor=1 00:06:30.680 22:12:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:30.680 22:12:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.680 22:12:20 version -- app/version.sh@19 -- # patch=0 00:06:30.680 22:12:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.680 22:12:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.680 22:12:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.680 22:12:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.680 22:12:20 version -- app/version.sh@22 -- # version=25.1 00:06:30.680 22:12:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.680 22:12:20 version -- app/version.sh@28 -- # version=25.1rc0 00:06:30.680 22:12:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:30.680 22:12:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.680 22:12:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:30.680 22:12:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:30.680 00:06:30.680 real 0m0.247s 00:06:30.680 user 0m0.152s 00:06:30.680 sys 0m0.137s 00:06:30.680 22:12:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.680 
22:12:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:30.680 ************************************ 00:06:30.680 END TEST version 00:06:30.681 ************************************ 00:06:30.681 22:12:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:30.681 22:12:20 -- spdk/autotest.sh@194 -- # uname -s 00:06:30.681 22:12:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:30.681 22:12:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.681 22:12:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:30.681 22:12:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:30.681 22:12:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.681 22:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.681 22:12:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:30.681 22:12:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:30.681 22:12:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:30.681 22:12:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.681 22:12:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.681 22:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.681 ************************************ 00:06:30.681 START TEST nvmf_tcp 00:06:30.681 ************************************ 00:06:30.681 22:12:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:30.941 * Looking for test storage... 
00:06:30.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.941 22:12:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.941 --rc genhtml_branch_coverage=1 00:06:30.941 --rc genhtml_function_coverage=1 00:06:30.941 --rc genhtml_legend=1 00:06:30.941 --rc geninfo_all_blocks=1 00:06:30.941 --rc geninfo_unexecuted_blocks=1 00:06:30.941 00:06:30.941 ' 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.941 --rc genhtml_branch_coverage=1 00:06:30.941 --rc genhtml_function_coverage=1 00:06:30.941 --rc genhtml_legend=1 00:06:30.941 --rc geninfo_all_blocks=1 00:06:30.941 --rc geninfo_unexecuted_blocks=1 00:06:30.941 00:06:30.941 ' 00:06:30.941 22:12:20 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:30.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.941 --rc genhtml_branch_coverage=1 00:06:30.941 --rc genhtml_function_coverage=1 00:06:30.941 --rc genhtml_legend=1 00:06:30.941 --rc geninfo_all_blocks=1 00:06:30.941 --rc geninfo_unexecuted_blocks=1 00:06:30.941 00:06:30.941 ' 00:06:30.942 22:12:20 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.942 --rc genhtml_branch_coverage=1 00:06:30.942 --rc genhtml_function_coverage=1 00:06:30.942 --rc genhtml_legend=1 00:06:30.942 --rc geninfo_all_blocks=1 00:06:30.942 --rc geninfo_unexecuted_blocks=1 00:06:30.942 00:06:30.942 ' 00:06:30.942 22:12:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:30.942 22:12:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:30.942 22:12:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:30.942 22:12:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.942 22:12:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.942 22:12:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.942 ************************************ 00:06:30.942 START TEST nvmf_target_core 00:06:30.942 ************************************ 00:06:30.942 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:30.942 * Looking for test storage... 00:06:30.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:30.942 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.942 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.942 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.208 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.208 --rc genhtml_branch_coverage=1 00:06:31.208 --rc genhtml_function_coverage=1 00:06:31.208 --rc genhtml_legend=1 00:06:31.208 --rc geninfo_all_blocks=1 00:06:31.208 --rc geninfo_unexecuted_blocks=1 00:06:31.208 00:06:31.209 ' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.209 --rc genhtml_branch_coverage=1 00:06:31.209 --rc genhtml_function_coverage=1 00:06:31.209 --rc genhtml_legend=1 00:06:31.209 --rc geninfo_all_blocks=1 00:06:31.209 --rc geninfo_unexecuted_blocks=1 00:06:31.209 00:06:31.209 ' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.209 --rc genhtml_branch_coverage=1 00:06:31.209 --rc genhtml_function_coverage=1 00:06:31.209 --rc genhtml_legend=1 00:06:31.209 --rc geninfo_all_blocks=1 00:06:31.209 --rc geninfo_unexecuted_blocks=1 00:06:31.209 00:06:31.209 ' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.209 --rc genhtml_branch_coverage=1 00:06:31.209 --rc genhtml_function_coverage=1 00:06:31.209 --rc genhtml_legend=1 00:06:31.209 --rc geninfo_all_blocks=1 00:06:31.209 --rc geninfo_unexecuted_blocks=1 00:06:31.209 00:06:31.209 ' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.209 
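Note on the "[: : integer expression expected" message above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests expands to the empty string, and test(1) cannot compare an empty operand as an integer. The run continues because the failing test simply takes the false branch. A minimal sketch of the usual defensive pattern, using a hypothetical flag name since the log does not show which variable the script reads:

  # expand with a :-0 default so test(1) always sees an integer
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi
  # [[ ]] is also tolerant here: an empty operand evaluates as 0 in its -eq context
  [[ ${SOME_TEST_FLAG} -eq 1 ]] && echo "feature enabled"
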
************************************ 00:06:31.209 START TEST nvmf_abort 00:06:31.209 ************************************ 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:31.209 * Looking for test storage... 00:06:31.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.209 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.470 --rc genhtml_branch_coverage=1 00:06:31.470 --rc genhtml_function_coverage=1 00:06:31.470 --rc genhtml_legend=1 00:06:31.470 --rc geninfo_all_blocks=1 00:06:31.470 --rc geninfo_unexecuted_blocks=1 00:06:31.470 00:06:31.470 ' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.470 --rc genhtml_branch_coverage=1 00:06:31.470 --rc genhtml_function_coverage=1 00:06:31.470 --rc genhtml_legend=1 00:06:31.470 --rc geninfo_all_blocks=1 00:06:31.470 --rc geninfo_unexecuted_blocks=1 00:06:31.470 00:06:31.470 ' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.470 --rc genhtml_branch_coverage=1 00:06:31.470 --rc genhtml_function_coverage=1 00:06:31.470 --rc genhtml_legend=1 00:06:31.470 --rc geninfo_all_blocks=1 00:06:31.470 --rc geninfo_unexecuted_blocks=1 00:06:31.470 00:06:31.470 ' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.470 --rc genhtml_branch_coverage=1 00:06:31.470 --rc genhtml_function_coverage=1 00:06:31.470 --rc genhtml_legend=1 00:06:31.470 --rc geninfo_all_blocks=1 00:06:31.470 --rc geninfo_unexecuted_blocks=1 00:06:31.470 00:06:31.470 ' 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.470 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.471 22:12:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
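nvmftestinit, called just above, builds the two-port TCP topology that the rest of the trace exercises: it scans the PCI bus for supported NICs (two Intel E810 ports show up as cvl_0_0 and cvl_0_1), moves the target port into its own network namespace, assigns addresses, opens the NVMe/TCP port in iptables, and ping-checks both directions. Condensed from the trace that follows, the sequence is roughly this (device names and addresses are the ones from this run; a sketch, not the script itself):

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Running the target inside a namespace lets one machine act as both initiator and target over real NIC ports instead of loopback.
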
00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.471 22:12:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.053 22:12:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:38.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:38.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.053 22:12:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:38.053 Found net devices under 0000:af:00.0: cvl_0_0 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.053 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:38.054 Found net devices under 0000:af:00.1: cvl_0_1 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.054 22:12:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.054 22:12:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:06:38.054 00:06:38.054 --- 10.0.0.2 ping statistics --- 00:06:38.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.054 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:38.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:06:38.054 00:06:38.054 --- 10.0.0.1 ping statistics --- 00:06:38.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.054 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122893 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122893 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122893 ']' 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 [2024-12-16 22:12:27.121932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:38.054 [2024-12-16 22:12:27.121976] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.054 [2024-12-16 22:12:27.198019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.054 [2024-12-16 22:12:27.221613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.054 [2024-12-16 22:12:27.221648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.054 [2024-12-16 22:12:27.221655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.054 [2024-12-16 22:12:27.221661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.054 [2024-12-16 22:12:27.221666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.054 [2024-12-16 22:12:27.223000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.054 [2024-12-16 22:12:27.223096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.054 [2024-12-16 22:12:27.223096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 [2024-12-16 22:12:27.355064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 Malloc0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 Delay0 
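With Malloc0 and its Delay0 wrapper created above, abort.sh has begun configuring the target over RPC; the subsystem, namespace, and listener calls continue in the trace below. rpc_cmd is the harness's wrapper around scripts/rpc.py, so issued by hand the full sequence would look roughly like this (arguments verbatim from the trace; the comments are interpretive):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4 KiB blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000             # inject latency so I/O stays in flight long enough to abort
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
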
00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.054 [2024-12-16 22:12:27.433798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.054 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.055 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.055 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.055 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.055 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.055 22:12:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:38.055 [2024-12-16 22:12:27.566984] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:39.963 Initializing NVMe Controllers 00:06:39.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.963 controller IO queue size 128 less than required 00:06:39.963 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:39.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:39.963 Initialization complete. Launching workers. 
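The workload traced above is the bundled abort example; the counters that follow are its exit summary (I/Os completed per namespace, aborts submitted and their outcomes per controller). The invocation is verbatim from the trace; the flag readings below follow the usual SPDK example-app conventions and are not spelled out in the log:

  #   -r  transport ID of the listener to connect to
  #   -c  core mask (0x1: a single core)
  #   -t  run time in seconds
  #   -l  log level
  #   -q  queue depth: 128 outstanding I/Os for the aborts to race against
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The "queue size 128 less than required" notice above is the example warning that not every request fits in the controller's queue at this depth; for this test that is expected rather than fatal, since pending commands are what the abort path exercises.
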
00:06:39.963 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 38676 00:06:39.963 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38738, failed to submit 62 00:06:39.963 success 38680, unsuccessful 58, failed 0 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.963 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.223 rmmod nvme_tcp 00:06:40.223 rmmod nvme_fabrics 00:06:40.223 rmmod nvme_keyring 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122893 ']' 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122893 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122893 ']' 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122893 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122893 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122893' 00:06:40.223 killing process with pid 122893 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122893 00:06:40.223 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122893 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:40.482 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:40.483 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.483 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.483 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.483 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.483 22:12:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:42.389 00:06:42.389 real 0m11.222s 00:06:42.389 user 0m11.759s 00:06:42.389 sys 0m5.208s 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.389 ************************************ 00:06:42.389 END TEST nvmf_abort 00:06:42.389 ************************************ 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.389 ************************************ 00:06:42.389 START TEST nvmf_ns_hotplug_stress 00:06:42.389 ************************************ 00:06:42.389 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:42.649 * Looking for test storage... 
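The "Looking for test storage" banner and the lcov probe that follows repeat for every test because each script re-sources autotest_common.sh; the probe is scripts/common.sh comparing the installed lcov version (1.15 here) field by field against 2 to decide whether the pre-2.x --rc coverage flags must be exported. The traced lt/cmp_versions walk reduces to roughly this (a sketch of the logic, not the script verbatim):

  version_lt() {                       # true when $1 sorts before $2
      local IFS=.-:                    # split fields on . - : as the trace does
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                         # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "old lcov: export the --rc lcov_branch_coverage=1 ... fallbacks"
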
00:06:42.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.649 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.650 --rc genhtml_branch_coverage=1 00:06:42.650 --rc genhtml_function_coverage=1 00:06:42.650 --rc genhtml_legend=1 00:06:42.650 --rc geninfo_all_blocks=1 00:06:42.650 --rc geninfo_unexecuted_blocks=1 00:06:42.650 00:06:42.650 ' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.650 --rc genhtml_branch_coverage=1 00:06:42.650 --rc genhtml_function_coverage=1 00:06:42.650 --rc genhtml_legend=1 00:06:42.650 --rc geninfo_all_blocks=1 00:06:42.650 --rc geninfo_unexecuted_blocks=1 00:06:42.650 00:06:42.650 ' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.650 --rc genhtml_branch_coverage=1 00:06:42.650 --rc genhtml_function_coverage=1 00:06:42.650 --rc genhtml_legend=1 00:06:42.650 --rc geninfo_all_blocks=1 00:06:42.650 --rc geninfo_unexecuted_blocks=1 00:06:42.650 00:06:42.650 ' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.650 --rc genhtml_branch_coverage=1 00:06:42.650 --rc genhtml_function_coverage=1 00:06:42.650 --rc genhtml_legend=1 00:06:42.650 --rc geninfo_all_blocks=1 00:06:42.650 --rc geninfo_unexecuted_blocks=1 00:06:42.650 00:06:42.650 ' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.650 22:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:49.226 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.226 
22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:49.226 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:49.226 Found net devices under 0000:af:00.0: cvl_0_0 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:49.226 Found net devices under 0000:af:00.1: cvl_0_1 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.226 22:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.226 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:06:49.227 00:06:49.227 --- 10.0.0.2 ping statistics --- 00:06:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.227 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:06:49.227 00:06:49.227 --- 10.0.0.1 ping statistics --- 00:06:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.227 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126927 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126927 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
126927 ']' 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.227 [2024-12-16 22:12:38.342156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:49.227 [2024-12-16 22:12:38.342213] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.227 [2024-12-16 22:12:38.420776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.227 [2024-12-16 22:12:38.442962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.227 [2024-12-16 22:12:38.442999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.227 [2024-12-16 22:12:38.443006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.227 [2024-12-16 22:12:38.443012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.227 [2024-12-16 22:12:38.443017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
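For orientation before the xtrace resumes: the target bring-up and hotplug stress loop recorded in the rest of this log reduce to the sequence below. This is a sketch distilled from the trace, not the verbatim ns_hotplug_stress.sh script; $rpc stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in each entry, and the loop structure is inferred from the repeating @44-@50 trace lines (null_size climbing from 1001).

    # Sketch of the flow the trace records (assumed shape, not the exact script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # 30-second random-read perf run in the background, then hot-remove and
    # re-add namespace 1 and grow NULL1 while I/O is in flight:
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 $PERF_PID; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
    done

Each loop iteration appears in the trace as one kill -0 / remove_ns / add_ns / bdev_null_resize group, with the resize RPC printing "true" on success.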
00:06:49.227 [2024-12-16 22:12:38.444369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.227 [2024-12-16 22:12:38.444475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.227 [2024-12-16 22:12:38.444477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:49.227 [2024-12-16 22:12:38.744639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.227 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:49.486 22:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.486 [2024-12-16 22:12:39.142052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.486 22:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:49.745 22:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:50.005 Malloc0 00:06:50.005 22:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.264 Delay0 00:06:50.264 22:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.522 22:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:50.523 NULL1 00:06:50.523 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:50.782 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:50.782 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=127314 00:06:50.782 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:50.782 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.040 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.299 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:51.299 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:51.299 true 00:06:51.299 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:51.299 22:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.558 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.818 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:51.818 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:52.076 true 00:06:52.076 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:52.076 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.334 22:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.594 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:52.594 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:52.594 true 00:06:52.854 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:52.854 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.854 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.113 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:53.113 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:53.371 true 00:06:53.372 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:53.372 22:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.631 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.890 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:53.890 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:54.149 true 00:06:54.149 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:54.149 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.149 22:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.408 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:54.408 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:54.667 true 00:06:54.667 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:54.667 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.926 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.185 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:55.185 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:55.185 true 00:06:55.443 22:12:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:55.443 22:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.443 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.701 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:55.701 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:55.959 true 00:06:55.959 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:55.959 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.218 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.477 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:56.477 22:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:56.477 true 00:06:56.736 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:56.736 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.736 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.995 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:56.995 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:57.254 true 00:06:57.254 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:57.254 22:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.514 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.773 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:57.773 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:57.773 true 00:06:57.773 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:57.773 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.032 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.292 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:58.292 22:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:58.550 true 00:06:58.550 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:58.550 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.809 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.069 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:59.069 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:59.069 true 00:06:59.069 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:59.069 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.328 22:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.587 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:59.587 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:59.846 true 00:06:59.846 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:06:59.846 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.106 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.365 22:12:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:00.365 22:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:00.365 true 00:07:00.365 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:00.365 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.624 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.883 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:00.883 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:01.142 true 00:07:01.142 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:01.142 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.401 22:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.662 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:01.662 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:01.662 true 00:07:01.662 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:01.662 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.921 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.181 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:02.181 22:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:02.440 true 00:07:02.440 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:02.440 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.700 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.959 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:02.960 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:02.960 true 00:07:03.218 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:03.218 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.218 22:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.477 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:03.477 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:03.735 true 00:07:03.735 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:03.735 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.994 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.254 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:04.254 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:04.514 true 00:07:04.514 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:04.514 22:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.514 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.774 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:04.774 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:05.033 true 00:07:05.033 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:05.033 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.292 22:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.552 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:05.552 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:05.812 true 00:07:05.812 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:05.812 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.812 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.071 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:06.071 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:06.331 true 00:07:06.331 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:06.331 22:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.590 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.850 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:06.850 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:07.110 true 00:07:07.110 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:07.110 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.110 22:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.369 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:07.369 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:07.629 true 00:07:07.629 22:12:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:07.629 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.889 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.148 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:08.148 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:08.408 true 00:07:08.408 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:08.408 22:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.668 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.928 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:08.928 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:08.928 true 00:07:08.928 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:08.928 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.188 22:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.449 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:09.449 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:09.709 true 00:07:09.709 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:09.709 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.969 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.969 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:09.969 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:10.229 true 00:07:10.229 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:10.229 22:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.488 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.747 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:10.747 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:11.007 true 00:07:11.007 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:11.007 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.267 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.526 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:11.526 22:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:11.526 true 00:07:11.526 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:11.526 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.785 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.045 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:12.045 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:12.304 true 00:07:12.304 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:12.304 22:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.564 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.824 22:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:12.824 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:12.824 true 00:07:12.824 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:12.824 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.083 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.343 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:13.343 22:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:13.602 true 00:07:13.602 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:13.602 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.862 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.122 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:14.122 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:14.122 true 00:07:14.122 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:14.122 22:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.382 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.642 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:14.642 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:14.901 true 00:07:14.901 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:14.901 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.161 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.420 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:15.420 22:13:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:15.420 true 00:07:15.420 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:15.420 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.694 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.956 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:15.956 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:16.216 true 00:07:16.216 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:16.216 22:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.475 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.734 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:16.734 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:16.734 true 00:07:16.734 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:16.734 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.994 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.254 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:17.254 22:13:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:17.514 true 00:07:17.514 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:17.514 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.773 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.033 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:18.033 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:18.033 true 00:07:18.292 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:18.292 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.292 22:13:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.552 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:18.552 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:18.811 true 00:07:18.811 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:18.811 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.071 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.330 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:19.330 22:13:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:19.589 true 00:07:19.589 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:19.589 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.848 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.848 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:19.848 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:20.107 true 00:07:20.107 22:13:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:20.107 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.366 22:13:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.625 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:20.625 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:20.885 true 00:07:20.885 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314 00:07:20.885 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.145 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.145 Initializing NVMe Controllers 00:07:21.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.145 Controller IO queue size 128, less than required. 00:07:21.145 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:21.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:21.145 Initialization complete. Launching workers. 
00:07:21.145 ========================================================
00:07:21.145                                                                            Latency(us)
00:07:21.145 Device Information                                                       : IOPS       MiB/s    Average     min         max
00:07:21.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27511.90   13.43    4652.29     2311.50     8614.89
00:07:21.145 ========================================================
00:07:21.145 Total                                                                    : 27511.90   13.43    4652.29     2311.50     8614.89
00:07:21.145
00:07:21.145 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:21.145 22:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:21.404 true
00:07:21.404 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127314
00:07:21.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (127314) - No such process
00:07:21.404 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 127314
00:07:21.404 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:21.663 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:21.922 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:21.922 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:21.922 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:21.922 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:21.922 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:21.922 null0
00:07:22.181 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:22.181 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:22.181 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:22.181 null1
00:07:22.181 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:22.181 22:13:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:22.181 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:22.440 null2
00:07:22.440 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:22.440 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:22.440
22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:22.699 null3 00:07:22.699 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.699 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.699 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:22.958 null4 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:22.958 null5 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.958 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:23.218 null6 00:07:23.218 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.218 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.218 22:13:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:23.478 null7 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
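The long trace above, up to the perf latency summary, is the namespace hotplug loop of ns_hotplug_stress.sh: as long as the background perf process (PID 127314 in this run) stays alive, the script detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, bumps null_size by one, and resizes the NULL1 bdev to match; each "true" is the stdout of a successful bdev_null_resize RPC. Reconstructed from the @44-@50 xtrace markers, the loop is roughly the sketch below; the RPC invocations are verbatim from the log, while $rpc_py, $perf_pid, and the starting null_size are assumptions:

    null_size=1026                    # assumed start; this excerpt picks up at 1027
    while kill -0 $perf_pid; do       # @44: keep looping while perf still runs
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46
        (( ++null_size ))             # @49: 1027, 1028, ... 1047 in this excerpt
        $rpc_py bdev_null_resize NULL1 $null_size                          # @50: prints "true"
    done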
00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
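Once kill -0 reports that the perf process is gone, the script reaps it, strips namespaces 1 and 2, and sets up the parallel phase: eight null bdevs of 100 MiB with a 4096-byte block size, one per worker. The scaffolding implied by the @53-@66 markers looks roughly like this sketch; add_remove is the worker function traced in the surrounding output, and the "wait 132828 132829 ..." line further down is the pids array being expanded at @66:

    wait $perf_pid                                                  # @53: reap the finished perf job
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @54
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2   # @55
    nthreads=8                                                      # @58
    pids=()                                                         # @58
    for ((i = 0; i < nthreads; i++)); do                            # @59
        $rpc_py bdev_null_create null$i 100 4096                    # @60: name, size in MiB, block size
    done
    for ((i = 0; i < nthreads; i++)); do                            # @62
        add_remove $((i + 1)) null$i &                              # @63: namespace i+1 paired with bdev null$i
        pids+=($!)                                                  # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                               # @66: block until all eight finish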
00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
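Each worker runs the add_remove shell function, whose body the @14-@18 markers trace: ten rounds of attaching its bdev to cnode1 under a fixed namespace ID and detaching it again, so that at any moment an I/O against that namespace may find it present or absent. Reconstructed from those markers ($rpc_py again stands in for the full rpc.py path):

    add_remove() {
        local nsid=$1 bdev=$2             # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do    # @16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid         # @18
        done
    }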
00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:23.478 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132828 132829 132831 132833 132835 132837 132838 132840 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.479 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.738 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
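Every "-- target/ns_hotplug_stress.sh@NN -- #" marker in this output is bash xtrace at work: the test harness sets PS4 so that each traced command is prefixed with its source file and line number, which is the only thing that keeps eight interleaved workers readable. A minimal stand-alone illustration of the mechanism (this is not SPDK's literal PS4 definition):

    #!/usr/bin/env bash
    # demo.sh: reproduce "file@line -- # command" xtrace markers
    PS4='-- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x
    echo hello    # xtrace prints: -- demo.sh@5 -- # echo hello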
00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.996 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.997 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.997 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.997 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.997 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.997 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.256 22:13:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.516 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.776 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.036 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.295 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.295 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.296 22:13:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.556 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
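With eight workers adding and removing namespaces 1 through 8 concurrently, the set of namespaces attached to cnode1 at any instant is effectively random. To snapshot it mid-run, one could query the target over the same RPC channel with nvmf_get_subsystems; the jq filter below assumes the usual JSON shape of that call's output and is illustrative only:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Print the namespace IDs currently attached to cnode1
    $rpc_py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'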
00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.557 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.817 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.075 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.075 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.075 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.075 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
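For reference, the two RPCs being hammered here can be issued by hand against a running target; the pair below mirrors one iteration from the trace (namespace ID 1 backed by bdev null0):

    # attach bdev null0 to the subsystem as namespace 1, then detach it
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1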
00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.076 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.335 22:13:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.594 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.853 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.854 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.114 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.374 22:13:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.633 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.634 rmmod nvme_tcp 00:07:27.634 rmmod nvme_fabrics 00:07:27.634 rmmod nvme_keyring 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126927 ']' 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126927 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126927 ']' 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126927 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126927 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126927' 00:07:27.634 killing process with pid 126927 
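The teardown starting at ns_hotplug_stress.sh@68 above unwinds everything the test set up: the EXIT trap is cleared, nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, and killprocess reaps the target. A condensed sketch of the kill sequence the trace steps through here (a simplification; the real helper lives in common/autotest_common.sh):

    # reap the nvmf_tgt reactor, mirroring autotest_common.sh@954-@978 above
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # here: reactor_1
        [ "$name" = sudo ] && return 1             # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # @978: block until it exits
    }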
00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126927 00:07:27.634 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126927 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.894 22:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.435 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.435 00:07:30.435 real 0m47.461s 00:07:30.435 user 3m22.540s 00:07:30.435 sys 0m16.942s 00:07:30.435 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.435 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.435 ************************************ 00:07:30.435 END TEST nvmf_ns_hotplug_stress 00:07:30.435 ************************************ 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.436 ************************************ 00:07:30.436 START TEST nvmf_delete_subsystem 00:07:30.436 ************************************ 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:30.436 * Looking for test storage... 
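run_test, visible in the START TEST banner above and the real/user/sys summary before it, is a thin timing-and-banner wrapper around each per-test script; roughly (a reduction — the actual helper in common/autotest_common.sh also validates its arguments):

    # shape of the run_test wrapper implied by the banners and timings above
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"            # e.g. target/delete_subsystem.sh --transport=tcp
        echo "************ END TEST $name ************"
    }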
00:07:30.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.436 --rc genhtml_branch_coverage=1 00:07:30.436 --rc genhtml_function_coverage=1 00:07:30.436 --rc genhtml_legend=1 00:07:30.436 --rc geninfo_all_blocks=1 00:07:30.436 --rc geninfo_unexecuted_blocks=1 00:07:30.436 00:07:30.436 ' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.436 --rc genhtml_branch_coverage=1 00:07:30.436 --rc genhtml_function_coverage=1 00:07:30.436 --rc genhtml_legend=1 00:07:30.436 --rc geninfo_all_blocks=1 00:07:30.436 --rc geninfo_unexecuted_blocks=1 00:07:30.436 00:07:30.436 ' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.436 --rc genhtml_branch_coverage=1 00:07:30.436 --rc genhtml_function_coverage=1 00:07:30.436 --rc genhtml_legend=1 00:07:30.436 --rc geninfo_all_blocks=1 00:07:30.436 --rc geninfo_unexecuted_blocks=1 00:07:30.436 00:07:30.436 ' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.436 --rc genhtml_branch_coverage=1 00:07:30.436 --rc genhtml_function_coverage=1 00:07:30.436 --rc genhtml_legend=1 00:07:30.436 --rc geninfo_all_blocks=1 00:07:30.436 --rc geninfo_unexecuted_blocks=1 00:07:30.436 00:07:30.436 ' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.436 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.437 22:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.013 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:37.014 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.014 
22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:37.014 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:37.014 Found net devices under 0000:af:00.0: cvl_0_0 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:37.014 Found net devices under 0000:af:00.1: cvl_0_1 
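Note that device discovery here never shells out to a vendor tool: for each supported PCI function, nvmf/common.sh simply globs sysfs for the bound kernel net device. The two expansions in the trace reduce to:

    # map a PCI function to its interface name via sysfs, as in the
    # nvmf/common.sh@411/@427/@428 lines above
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # -> .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"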
00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:07:37.014 00:07:37.014 --- 10.0.0.2 ping statistics --- 00:07:37.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.014 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:07:37.014 00:07:37.014 --- 10.0.0.1 ping statistics --- 00:07:37.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.014 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:37.014 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137262 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137262 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137262 ']' 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.015 22:13:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.015 22:13:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 [2024-12-16 22:13:25.817142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:37.015 [2024-12-16 22:13:25.817189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.015 [2024-12-16 22:13:25.895084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.015 [2024-12-16 22:13:25.917237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.015 [2024-12-16 22:13:25.917272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.015 [2024-12-16 22:13:25.917279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.015 [2024-12-16 22:13:25.917285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.015 [2024-12-16 22:13:25.917289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.015 [2024-12-16 22:13:25.918445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.015 [2024-12-16 22:13:25.918446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 [2024-12-16 22:13:26.050014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.015 22:13:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 [2024-12-16 22:13:26.070204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 NULL1 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 Delay0 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137376 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:37.015 22:13:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.015 [2024-12-16 22:13:26.182057] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
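
The interfaces found there feed nvmf_tcp_init, traced a little earlier: it flushes both ports, moves the target port cvl_0_0 into a private network namespace, assigns 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), opens TCP port 4420, and verifies reachability in both directions, so NVMe/TCP traffic really crosses the physical link instead of being short-circuited over loopback. Condensed from the traced commands (the harness additionally tags the iptables rule with an SPDK_NVMF comment so cleanup can find it later):

    # Target side lives in a private namespace; initiator side stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
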
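
Once both pings succeed, the target application is started inside the namespace and the storage stack is assembled over its RPC socket: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O stays in flight long enough for the upcoming delete to race against it. The rpc_cmd wrapper in the trace should be equivalent to plain scripts/rpc.py calls, roughly (a sketch, not the harness's literal code):

    # Start the target in the namespace (reactor mask 0x3, all trace groups on);
    # the harness then polls /var/tmp/spdk.sock until RPCs are accepted (waitforlisten).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512      # 1000 MB backing bdev, 512-byte blocks
    # ~1 s average and p99 latency on both reads and writes (values in microseconds):
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
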
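
The workload generator is spdk_nvme_perf, run from the root namespace against that listener. Its flags as used above, annotated (my reading of them; perf's own -h output is authoritative):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
    # -c 0xC      I/O workers on cores 2 and 3 (matches "Associating ... lcore 2/3" below)
    # -r '...'    transport ID: NVMe/TCP over IPv4 to 10.0.0.2, service port 4420
    # -t 5        run time in seconds
    # -q 128      queue depth per qpair
    # -w randrw   random mixed read/write workload
    # -M 70       mix ratio, 70% reads
    # -o 512      I/O size in bytes
    # -P 4        qpair count knob as traced; exact semantics per perf's help text
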
00:07:38.922 22:13:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:38.922 22:13:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.922 22:13:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:38.922 Write completed with error (sct=0, sc=8)
00:07:38.922 starting I/O failed: -6
00:07:38.922 Write completed with error (sct=0, sc=8)
00:07:38.922 Read completed with error (sct=0, sc=8)
00:07:38.922 Write completed with error (sct=0, sc=8)
00:07:38.922 Read completed with error (sct=0, sc=8)
00:07:38.922 starting I/O failed: -6
...
00:07:38.923 [2024-12-16 22:13:28.296734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2296920 is same with the state(6) to be set
...
00:07:38.923 [2024-12-16 22:13:28.297725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242140 is same with the state(6) to be set
...
00:07:38.923 [2024-12-16 22:13:28.301325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d8c00d4d0 is same with the state(6) to be set
...
00:07:39.860 [2024-12-16 22:13:29.275031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223f260 is same with the state(6) to be set
...
00:07:39.860 [2024-12-16 22:13:29.299897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2241c60 is same with the state(6) to be set
...
00:07:39.860 [2024-12-16 22:13:29.300174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22965f0 is same with the state(6) to be set
...
00:07:39.861 [2024-12-16 22:13:29.303438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d8c00d800 is same with the state(6) to be set
...
00:07:39.861 [2024-12-16 22:13:29.304040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d8c00d060 is same with the state(6) to be set
00:07:39.861 Initializing NVMe Controllers
00:07:39.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:39.861 Controller IO queue size 128, less than required.
00:07:39.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:39.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:39.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:39.861 Initialization complete. Launching workers.
00:07:39.861 ========================================================
00:07:39.861                                                                                Latency(us)
00:07:39.861 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:39.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     159.85       0.08  918871.99     998.79 1005897.87
00:07:39.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     163.34       0.08  915757.60     235.95 2000922.53
00:07:39.861 ========================================================
00:07:39.861 Total                                                                    :     323.19       0.16  917298.00     235.95 2000922.53
00:07:39.861
00:07:39.861 [2024-12-16 22:13:29.304577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223f260 (9): Bad file descriptor
00:07:39.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:39.861 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.861 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:39.861 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137376
00:07:39.861 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:40.120 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:40.120 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137376
00:07:40.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137376) - No such process
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137376
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137376
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137376
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:40.121 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:40.121 22:13:29
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.380 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.380 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.380 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.380 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.380 [2024-12-16 22:13:29.831599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.380 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=137961 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:40.381 22:13:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.381 [2024-12-16 22:13:29.913582] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
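
For reference, the completion spam dumped while the first delete raced the workload decodes cleanly: in an NVMe completion, sct=0 is the generic status code type, and within it sc=8 is "Command Aborted due to SQ Deletion", exactly what you want to see when a subsystem is torn down under load; the "starting I/O failed: -6" lines are -ENXIO from the submit path once the qpair is gone. That is my decoding from the NVMe base spec, not something the harness asserts; a trivial helper to make it concrete:

    # Decode the (sct, sc) pair printed in the completion errors above.
    decode_cpl() {
        local sct=$1 sc=$2
        if (( sct == 0 )) && (( sc == 8 )); then
            echo "Generic status: Command Aborted due to SQ Deletion"
        else
            echo "sct=$sct sc=$sc: see the NVMe base spec status code tables"
        fi
    }
    decode_cpl 0 8    # -> Generic status: Command Aborted due to SQ Deletion
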
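
The delay/kill -0/sleep lines around here are delete_subsystem.sh confirming that perf exits on its own, after the delete in the first run and after its 3-second timer in this one. The shape of that loop, reconstructed from the traced commands rather than copied from the script:

    perf_pid=$!                                   # spdk_nvme_perf runs in the background
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do     # kill -0 probes without signaling
        (( delay++ > 20 )) && exit 1              # fail the test if perf outlives ~10 s
        sleep 0.5
    done
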
00:07:40.949 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.949 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:40.949 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.208 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.208 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:41.208 22:13:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.777 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.777 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:41.777 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.346 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.346 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:42.346 22:13:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.915 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.915 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:42.915 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.174 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.174 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961 00:07:43.174 22:13:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.743 Initializing NVMe Controllers 00:07:43.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.743 Controller IO queue size 128, less than required. 00:07:43.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:43.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:43.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:43.743 Initialization complete. Launching workers. 
00:07:43.743 ========================================================
00:07:43.743                                                                                Latency(us)
00:07:43.743 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:43.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002147.81 1000119.95 1007829.99
00:07:43.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003885.92 1000453.04 1010048.72
00:07:43.743 ========================================================
00:07:43.743 Total                                                                    :     256.00       0.12 1003016.87 1000119.95 1010048.72
00:07:43.743
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137961
00:07:43.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (137961) - No such process
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 137961
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:43.743 rmmod nvme_tcp
00:07:43.743 rmmod nvme_fabrics
00:07:43.743 rmmod nvme_keyring
00:07:43.743 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137262 ']'
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137262
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137262 ']'
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137262
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137262
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137262' 00:07:44.003 killing process with pid 137262 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137262 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137262 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.003 22:13:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:46.544 00:07:46.544 real 0m16.098s 00:07:46.544 user 0m29.340s 00:07:46.544 sys 0m5.383s 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.544 ************************************ 00:07:46.544 END TEST nvmf_delete_subsystem 00:07:46.544 ************************************ 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.544 ************************************ 00:07:46.544 START TEST nvmf_host_management 00:07:46.544 ************************************ 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:46.544 * Looking for test storage... 
00:07:46.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.544 --rc genhtml_branch_coverage=1 00:07:46.544 --rc genhtml_function_coverage=1 00:07:46.544 --rc genhtml_legend=1 00:07:46.544 --rc geninfo_all_blocks=1 00:07:46.544 --rc geninfo_unexecuted_blocks=1 00:07:46.544 00:07:46.544 ' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.544 --rc genhtml_branch_coverage=1 00:07:46.544 --rc genhtml_function_coverage=1 00:07:46.544 --rc genhtml_legend=1 00:07:46.544 --rc geninfo_all_blocks=1 00:07:46.544 --rc geninfo_unexecuted_blocks=1 00:07:46.544 00:07:46.544 ' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.544 --rc genhtml_branch_coverage=1 00:07:46.544 --rc genhtml_function_coverage=1 00:07:46.544 --rc genhtml_legend=1 00:07:46.544 --rc geninfo_all_blocks=1 00:07:46.544 --rc geninfo_unexecuted_blocks=1 00:07:46.544 00:07:46.544 ' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.544 --rc genhtml_branch_coverage=1 00:07:46.544 --rc genhtml_function_coverage=1 00:07:46.544 --rc genhtml_legend=1 00:07:46.544 --rc geninfo_all_blocks=1 00:07:46.544 --rc geninfo_unexecuted_blocks=1 00:07:46.544 00:07:46.544 ' 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.544 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.545 22:13:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:46.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.545 22:13:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:53.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:53.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:53.124 Found net devices under 0000:af:00.0: cvl_0_0 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.124 22:13:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:53.124 Found net devices under 0000:af:00.1: cvl_0_1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.124 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:07:53.125 00:07:53.125 --- 10.0.0.2 ping statistics --- 00:07:53.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.125 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:07:53.125 22:13:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:53.125 00:07:53.125 --- 10.0.0.1 ping statistics --- 00:07:53.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.125 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=142004 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 142004 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:53.125 22:13:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142004 ']' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 [2024-12-16 22:13:42.098184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:53.125 [2024-12-16 22:13:42.098234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.125 [2024-12-16 22:13:42.176507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.125 [2024-12-16 22:13:42.199080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.125 [2024-12-16 22:13:42.199116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.125 [2024-12-16 22:13:42.199123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.125 [2024-12-16 22:13:42.199129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.125 [2024-12-16 22:13:42.199133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
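A note on the core mask before the reactor lines below: the -m 0x1E argument passed to nvmf_tgt above is a CPU core bitmask with bits 1 through 4 set, which is why exactly four reactors come up on cores 1-4. A minimal sketch of how such a mask expands, assuming plain bash arithmetic (an illustrative helper, not part of the SPDK scripts):

    # Expand a reactor core mask like the -m 0x1E given to nvmf_tgt.
    mask=0x1E
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"   # 0x1E -> cores 1, 2, 3, 4
        fi
    done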
00:07:53.125 [2024-12-16 22:13:42.200490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.125 [2024-12-16 22:13:42.200599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.125 [2024-12-16 22:13:42.200707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.125 [2024-12-16 22:13:42.200709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 [2024-12-16 22:13:42.340466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 Malloc0 00:07:53.125 [2024-12-16 22:13:42.417152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=142250 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142250 /var/tmp/bdevperf.sock 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142250 ']' 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.125 { 00:07:53.125 "params": { 00:07:53.125 "name": "Nvme$subsystem", 00:07:53.125 "trtype": "$TEST_TRANSPORT", 00:07:53.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.125 "adrfam": "ipv4", 00:07:53.125 "trsvcid": "$NVMF_PORT", 00:07:53.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.125 "hdgst": ${hdgst:-false}, 00:07:53.125 "ddgst": ${ddgst:-false} 00:07:53.125 }, 00:07:53.125 "method": "bdev_nvme_attach_controller" 00:07:53.125 } 00:07:53.125 EOF 00:07:53.125 )") 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:53.125 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.125 "params": { 00:07:53.125 "name": "Nvme0", 00:07:53.125 "trtype": "tcp", 00:07:53.125 "traddr": "10.0.0.2", 00:07:53.125 "adrfam": "ipv4", 00:07:53.125 "trsvcid": "4420", 00:07:53.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.125 "hdgst": false, 00:07:53.125 "ddgst": false 00:07:53.125 }, 00:07:53.125 "method": "bdev_nvme_attach_controller" 00:07:53.125 }' 00:07:53.125 [2024-12-16 22:13:42.509572] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
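The gen_nvmf_target_json output above is fed to bdevperf through /dev/fd/63 and amounts to a single bdev_nvme_attach_controller call that connects Nvme0 to the target at 10.0.0.2:4420 over NVMe/TCP before the verify workload starts. A standalone sketch of an equivalent launch follows; the outer "subsystems"/"bdev" wrapper object is an assumption, since the log only shows the inner params object being printed:

    # Equivalent one-shot launch (wrapper object assumed, see note above).
    cfg='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
      "subnqn":"nqn.2016-06.io.spdk:cnode0","hostnqn":"nqn.2016-06.io.spdk:host0",
      "hdgst":false,"ddgst":false}}]}]}'
    ./build/examples/bdevperf --json <(printf '%s' "$cfg") -q 64 -o 65536 -w verify -t 10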
00:07:53.125 [2024-12-16 22:13:42.509617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142250 ] 00:07:53.125 [2024-12-16 22:13:42.582671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.125 [2024-12-16 22:13:42.604931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.385 Running I/O for 10 seconds... 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:07:53.385 22:13:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:53.646 
22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.646 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.646 [2024-12-16 22:13:43.294667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.646 [2024-12-16 22:13:43.294706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.646 [2024-12-16 22:13:43.294724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.646 [2024-12-16 22:13:43.294738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:07:53.646 [2024-12-16 22:13:43.294752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1d40 is same with the state(6) to be set 00:07:53.646 [2024-12-16 22:13:43.294828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 
22:13:43.294972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.294987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.294993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.295008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.295022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.295038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.646 [2024-12-16 22:13:43.295068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.646 [2024-12-16 22:13:43.295075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 
22:13:43.295118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 
22:13:43.295271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 
22:13:43.295418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 
22:13:43.295562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.647 [2024-12-16 22:13:43.295615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.647 [2024-12-16 22:13:43.295623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 
22:13:43.295710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.295768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.648 [2024-12-16 22:13:43.295774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.648 [2024-12-16 22:13:43.296702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:53.648 task offset: 106496 on job bdev=Nvme0n1 fails 00:07:53.648 00:07:53.648 Latency(us) 00:07:53.648 [2024-12-16T21:13:43.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.648 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.648 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:53.648 Verification LBA range: start 0x0 length 0x400 00:07:53.648 Nvme0n1 : 0.41 2034.13 127.13 156.47 0.00 28436.90 1412.14 26588.89 00:07:53.648 [2024-12-16T21:13:43.349Z] =================================================================================================================== 00:07:53.648 [2024-12-16T21:13:43.349Z] Total : 2034.13 127.13 156.47 0.00 28436.90 1412.14 26588.89 00:07:53.648 [2024-12-16 22:13:43.299014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.648 [2024-12-16 22:13:43.299034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a1d40 (9): Bad file descriptor 00:07:53.648 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.648 22:13:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:53.648 [2024-12-16 22:13:43.311795] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
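The IOPS and MiB/s columns in the table above are consistent with the 64 KiB I/O size of this run (-o 65536): one I/O is 65536 bytes, so MiB/s is simply IOPS divided by 16. A quick check of the failed first run's row:

    # 2034.13 IOPS at 64 KiB per I/O, converted to MiB/s:
    echo 'scale=2; 2034.13 * 65536 / 1048576' | bc   # -> 127.13, matching the table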
00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142250 00:07:55.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142250) - No such process 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.027 { 00:07:55.027 "params": { 00:07:55.027 "name": "Nvme$subsystem", 00:07:55.027 "trtype": "$TEST_TRANSPORT", 00:07:55.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.027 "adrfam": "ipv4", 00:07:55.027 "trsvcid": "$NVMF_PORT", 00:07:55.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.027 "hdgst": ${hdgst:-false}, 00:07:55.027 "ddgst": ${ddgst:-false} 00:07:55.027 }, 00:07:55.027 "method": "bdev_nvme_attach_controller" 00:07:55.027 } 00:07:55.027 EOF 00:07:55.027 )") 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:55.027 22:13:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.027 "params": { 00:07:55.027 "name": "Nvme0", 00:07:55.027 "trtype": "tcp", 00:07:55.027 "traddr": "10.0.0.2", 00:07:55.027 "adrfam": "ipv4", 00:07:55.027 "trsvcid": "4420", 00:07:55.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.027 "hdgst": false, 00:07:55.027 "ddgst": false 00:07:55.027 }, 00:07:55.027 "method": "bdev_nvme_attach_controller" 00:07:55.027 }' 00:07:55.027 [2024-12-16 22:13:44.355833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:55.027 [2024-12-16 22:13:44.355893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142488 ] 00:07:55.027 [2024-12-16 22:13:44.432077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.027 [2024-12-16 22:13:44.453232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.027 Running I/O for 1 seconds... 
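The "kill: (142250) - No such process" result above is expected rather than a failure: the first bdevperf exited on its own after the controller reset, and the trace shows host_management.sh line 91 following the kill with a bare true so the failed signal cannot abort the run (assuming the scripts execute with errexit, which that guard suggests). The usual shape of the pattern, as a sketch:

    # Reap the perf process if it is still alive; tolerate it being gone.
    kill -9 "$perfpid" 2>/dev/null || true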
00:07:56.405 2048.00 IOPS, 128.00 MiB/s 00:07:56.405 Latency(us) 00:07:56.405 [2024-12-16T21:13:46.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.405 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.405 Verification LBA range: start 0x0 length 0x400 00:07:56.405 Nvme0n1 : 1.02 2071.77 129.49 0.00 0.00 30410.00 4088.20 26963.38 00:07:56.405 [2024-12-16T21:13:46.106Z] =================================================================================================================== 00:07:56.405 [2024-12-16T21:13:46.106Z] Total : 2071.77 129.49 0.00 0.00 30410.00 4088.20 26963.38 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.405 rmmod nvme_tcp 00:07:56.405 rmmod nvme_fabrics 00:07:56.405 rmmod nvme_keyring 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 142004 ']' 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 142004 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 142004 ']' 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 142004 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142004 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.405 22:13:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.405 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142004' 00:07:56.406 killing process with pid 142004 00:07:56.406 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 142004 00:07:56.406 22:13:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 142004 00:07:56.666 [2024-12-16 22:13:46.141822] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.666 22:13:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.575 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.575 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:58.575 00:07:58.575 real 0m12.445s 00:07:58.575 user 0m19.841s 00:07:58.575 sys 0m5.476s 00:07:58.575 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.575 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.575 ************************************ 00:07:58.575 END TEST nvmf_host_management 00:07:58.575 ************************************ 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.836 ************************************ 00:07:58.836 START TEST nvmf_lvol 00:07:58.836 ************************************ 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:58.836 * Looking for test storage... 00:07:58.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.836 --rc genhtml_branch_coverage=1 00:07:58.836 --rc genhtml_function_coverage=1 00:07:58.836 --rc genhtml_legend=1 00:07:58.836 --rc geninfo_all_blocks=1 00:07:58.836 --rc geninfo_unexecuted_blocks=1 00:07:58.836 00:07:58.836 ' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.836 --rc genhtml_branch_coverage=1 00:07:58.836 --rc genhtml_function_coverage=1 00:07:58.836 --rc genhtml_legend=1 00:07:58.836 --rc geninfo_all_blocks=1 00:07:58.836 --rc geninfo_unexecuted_blocks=1 00:07:58.836 00:07:58.836 ' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.836 --rc genhtml_branch_coverage=1 00:07:58.836 --rc genhtml_function_coverage=1 00:07:58.836 --rc genhtml_legend=1 00:07:58.836 --rc geninfo_all_blocks=1 00:07:58.836 --rc geninfo_unexecuted_blocks=1 00:07:58.836 00:07:58.836 ' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.836 --rc genhtml_branch_coverage=1 00:07:58.836 --rc genhtml_function_coverage=1 00:07:58.836 --rc genhtml_legend=1 00:07:58.836 --rc geninfo_all_blocks=1 00:07:58.836 --rc geninfo_unexecuted_blocks=1 00:07:58.836 00:07:58.836 ' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
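[editor note] The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on '.', '-' and ':', and the numeric fields are walked left to right until one side wins, which selects the pre-2.0 LCOV_OPTS set exported just after. A condensed sketch of that walk (numeric fields only; the real helper also normalizes each field via decimal):

    lt() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side is older
        done
        return 1   # equal versions: not less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 < 2: keep the pre-2.0 lcov option set'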
00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.836 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.837 22:13:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:05.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:05.414 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.414 22:13:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:05.414 Found net devices under 0000:af:00.0: cvl_0_0 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:05.414 Found net devices under 0000:af:00.1: cvl_0_1 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:05.414 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:05.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:08:05.415 00:08:05.415 --- 10.0.0.2 ping statistics --- 00:08:05.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.415 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:08:05.415 00:08:05.415 --- 10.0.0.1 ping statistics --- 00:08:05.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.415 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146220 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146220 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146220 ']' 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.415 [2024-12-16 22:13:54.512869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:05.415 [2024-12-16 22:13:54.512919] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.415 [2024-12-16 22:13:54.590229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.415 [2024-12-16 22:13:54.613702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.415 [2024-12-16 22:13:54.613737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.415 [2024-12-16 22:13:54.613745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.415 [2024-12-16 22:13:54.613752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.415 [2024-12-16 22:13:54.613758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.415 [2024-12-16 22:13:54.618211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.415 [2024-12-16 22:13:54.618239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.415 [2024-12-16 22:13:54.618238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:05.415 [2024-12-16 22:13:54.918136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.415 22:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.674 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:05.674 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:05.933 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:05.933 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:05.933 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:06.192 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=23ba71f5-206e-44f7-84d8-408c782d9527 00:08:06.192 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23ba71f5-206e-44f7-84d8-408c782d9527 lvol 20 00:08:06.451 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b8d6aaef-6ce1-4e59-acbc-6e1f66db9b3c 00:08:06.452 22:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.711 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8d6aaef-6ce1-4e59-acbc-6e1f66db9b3c 00:08:06.711 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:06.970 [2024-12-16 22:13:56.541553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.970 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.230 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:07.230 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146684 00:08:07.230 22:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:08.166 22:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b8d6aaef-6ce1-4e59-acbc-6e1f66db9b3c MY_SNAPSHOT 00:08:08.425 22:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=777d92b1-7668-40c9-b468-0f2f974b7a82 00:08:08.425 22:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b8d6aaef-6ce1-4e59-acbc-6e1f66db9b3c 30 00:08:08.684 22:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 777d92b1-7668-40c9-b468-0f2f974b7a82 MY_CLONE 00:08:08.942 22:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a7e18156-6d3a-47b6-847c-24920eee98a8 00:08:08.942 22:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a7e18156-6d3a-47b6-847c-24920eee98a8 00:08:09.510 22:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146684 00:08:17.636 Initializing NVMe Controllers 00:08:17.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:17.636 Controller IO queue size 128, less than required. 00:08:17.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:17.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:17.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:17.636 Initialization complete. Launching workers. 00:08:17.636 ======================================================== 00:08:17.636 Latency(us) 00:08:17.636 Device Information : IOPS MiB/s Average min max 00:08:17.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12373.30 48.33 10345.27 1494.76 60269.64 00:08:17.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12224.10 47.75 10473.36 3435.26 57942.08 00:08:17.636 ======================================================== 00:08:17.636 Total : 24597.40 96.08 10408.93 1494.76 60269.64 00:08:17.636 00:08:17.636 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.895 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8d6aaef-6ce1-4e59-acbc-6e1f66db9b3c 00:08:17.895 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23ba71f5-206e-44f7-84d8-408c782d9527 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.154 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.154 rmmod nvme_tcp 00:08:18.154 rmmod nvme_fabrics 00:08:18.154 rmmod nvme_keyring 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146220 ']' 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146220 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146220 ']' 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146220 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146220 00:08:18.413 22:14:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146220' 00:08:18.413 killing process with pid 146220 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 146220 00:08:18.413 22:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146220 00:08:18.413 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.413 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.413 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.413 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:18.414 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:18.414 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.414 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.673 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.673 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.673 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.673 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.673 22:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.582 00:08:20.582 real 0m21.871s 00:08:20.582 user 1m3.156s 00:08:20.582 sys 0m7.507s 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.582 ************************************ 00:08:20.582 END TEST nvmf_lvol 00:08:20.582 ************************************ 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.582 ************************************ 00:08:20.582 START TEST nvmf_lvs_grow 00:08:20.582 ************************************ 00:08:20.582 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.842 * Looking for test storage... 
00:08:20.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.842 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.843 --rc genhtml_branch_coverage=1 00:08:20.843 --rc genhtml_function_coverage=1 00:08:20.843 --rc genhtml_legend=1 00:08:20.843 --rc geninfo_all_blocks=1 00:08:20.843 --rc geninfo_unexecuted_blocks=1 00:08:20.843 00:08:20.843 ' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.843 --rc genhtml_branch_coverage=1 00:08:20.843 --rc genhtml_function_coverage=1 00:08:20.843 --rc genhtml_legend=1 00:08:20.843 --rc geninfo_all_blocks=1 00:08:20.843 --rc geninfo_unexecuted_blocks=1 00:08:20.843 00:08:20.843 ' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.843 --rc genhtml_branch_coverage=1 00:08:20.843 --rc genhtml_function_coverage=1 00:08:20.843 --rc genhtml_legend=1 00:08:20.843 --rc geninfo_all_blocks=1 00:08:20.843 --rc geninfo_unexecuted_blocks=1 00:08:20.843 00:08:20.843 ' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.843 --rc genhtml_branch_coverage=1 00:08:20.843 --rc genhtml_function_coverage=1 00:08:20.843 --rc genhtml_legend=1 00:08:20.843 --rc geninfo_all_blocks=1 00:08:20.843 --rc geninfo_unexecuted_blocks=1 00:08:20.843 00:08:20.843 ' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:20.843 22:14:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.843 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.844 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.844 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.844 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.844 22:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:27.419 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:27.419 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.419 22:14:16 
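The device scan traced here builds ID tables for Intel E810/X722 and Mellanox parts, then walks each matching PCI function and reports the netdev registered under it. Stripped of the pci_bus_cache bookkeeping, roughly the same discovery can be reproduced with stock lspci and the standard sysfs layout (0x8086:0x159b being the E810 pair this run matched):

    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
        done
    done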
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:27.419 Found net devices under 0000:af:00.0: cvl_0_0 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:27.419 Found net devices under 0000:af:00.1: cvl_0_1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.419 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:08:27.419 00:08:27.419 --- 10.0.0.2 ping statistics --- 00:08:27.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.420 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:27.420 00:08:27.420 --- 10.0.0.1 ping statistics --- 00:08:27.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.420 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=152029 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 152029 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 152029 ']' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 [2024-12-16 22:14:16.464244] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
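Everything the TCP init trace just above did reduces to a short iproute2/iptables sequence: move the target-side port into its own network namespace, number the two ends out of 10.0.0.0/24, open TCP/4420 through the firewall, then prove reachability in both directions with ping. Condensed, with the interface names cvl_0_0/cvl_0_1 exactly as they appear in the trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator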
00:08:27.420 [2024-12-16 22:14:16.464294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.420 [2024-12-16 22:14:16.541081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.420 [2024-12-16 22:14:16.563174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.420 [2024-12-16 22:14:16.563214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.420 [2024-12-16 22:14:16.563221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.420 [2024-12-16 22:14:16.563227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.420 [2024-12-16 22:14:16.563232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.420 [2024-12-16 22:14:16.563719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:27.420 [2024-12-16 22:14:16.859808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 ************************************ 00:08:27.420 START TEST lvs_grow_clean 00:08:27.420 ************************************ 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:27.420 22:14:16 
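The target application runs inside that namespace, so the listener it opens later binds the namespaced interface. The startup and transport creation traced here come down to the following, with the long jenkins tree paths shortened:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                  # 152029 in this run
    # wait for /var/tmp/spdk.sock to come up, then:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192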
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.420 22:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.679 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:27.679 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.679 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:27.679 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:27.679 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.938 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.938 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.939 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 lvol 150 00:08:28.198 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=727b4eb7-f6bf-418c-bb1a-5f0de641e13d 00:08:28.199 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.199 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:28.199 [2024-12-16 22:14:17.876117] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:28.199 [2024-12-16 22:14:17.876168] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:28.199 true 00:08:28.199 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:28.199 22:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.458 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.458 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.717 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 727b4eb7-f6bf-418c-bb1a-5f0de641e13d 00:08:28.976 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.976 [2024-12-16 22:14:18.622313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.976 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152446 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152446 /var/tmp/bdevperf.sock 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152446 ']' 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.236 22:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.236 [2024-12-16 22:14:18.881982] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
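Pulled out of the trace, the clean-grow scenario is a short rpc.py sequence: back an lvstore with a 200M AIO file, carve a 150 MiB lvol out of it, export that lvol over NVMe/TCP, and grow the backing file to 400M so the lvstore can be grown mid-workload. Condensed, with the jenkins paths shortened and $lvs/$lvol standing in for the UUIDs 629f0dd1-... and 727b4eb7-...:

    rpc=scripts/rpc.py
    truncate -s 200M aio_file
    $rpc bdev_aio_create aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)            # 150 MiB volume
    truncate -s 400M aio_file
    $rpc bdev_aio_rescan aio_bdev     # 51200 -> 102400 blocks, per the notice above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420
    # issued later, while bdevperf is driving random writes:
    $rpc bdev_lvol_grow_lvstore -u "$lvs"       # total_data_clusters: 49 -> 99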
00:08:29.236 [2024-12-16 22:14:18.882029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152446 ] 00:08:29.496 [2024-12-16 22:14:18.956109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.496 [2024-12-16 22:14:18.978518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.496 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.496 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:29.496 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.755 Nvme0n1 00:08:29.755 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.013 [ 00:08:30.013 { 00:08:30.013 "name": "Nvme0n1", 00:08:30.013 "aliases": [ 00:08:30.013 "727b4eb7-f6bf-418c-bb1a-5f0de641e13d" 00:08:30.013 ], 00:08:30.013 "product_name": "NVMe disk", 00:08:30.013 "block_size": 4096, 00:08:30.013 "num_blocks": 38912, 00:08:30.013 "uuid": "727b4eb7-f6bf-418c-bb1a-5f0de641e13d", 00:08:30.013 "numa_id": 1, 00:08:30.013 "assigned_rate_limits": { 00:08:30.013 "rw_ios_per_sec": 0, 00:08:30.013 "rw_mbytes_per_sec": 0, 00:08:30.013 "r_mbytes_per_sec": 0, 00:08:30.013 "w_mbytes_per_sec": 0 00:08:30.013 }, 00:08:30.013 "claimed": false, 00:08:30.013 "zoned": false, 00:08:30.013 "supported_io_types": { 00:08:30.013 "read": true, 00:08:30.013 "write": true, 00:08:30.013 "unmap": true, 00:08:30.013 "flush": true, 00:08:30.013 "reset": true, 00:08:30.013 "nvme_admin": true, 00:08:30.013 "nvme_io": true, 00:08:30.013 "nvme_io_md": false, 00:08:30.013 "write_zeroes": true, 00:08:30.013 "zcopy": false, 00:08:30.013 "get_zone_info": false, 00:08:30.013 "zone_management": false, 00:08:30.013 "zone_append": false, 00:08:30.013 "compare": true, 00:08:30.013 "compare_and_write": true, 00:08:30.013 "abort": true, 00:08:30.013 "seek_hole": false, 00:08:30.013 "seek_data": false, 00:08:30.013 "copy": true, 00:08:30.013 "nvme_iov_md": false 00:08:30.013 }, 00:08:30.013 "memory_domains": [ 00:08:30.013 { 00:08:30.013 "dma_device_id": "system", 00:08:30.014 "dma_device_type": 1 00:08:30.014 } 00:08:30.014 ], 00:08:30.014 "driver_specific": { 00:08:30.014 "nvme": [ 00:08:30.014 { 00:08:30.014 "trid": { 00:08:30.014 "trtype": "TCP", 00:08:30.014 "adrfam": "IPv4", 00:08:30.014 "traddr": "10.0.0.2", 00:08:30.014 "trsvcid": "4420", 00:08:30.014 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.014 }, 00:08:30.014 "ctrlr_data": { 00:08:30.014 "cntlid": 1, 00:08:30.014 "vendor_id": "0x8086", 00:08:30.014 "model_number": "SPDK bdev Controller", 00:08:30.014 "serial_number": "SPDK0", 00:08:30.014 "firmware_revision": "25.01", 00:08:30.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.014 "oacs": { 00:08:30.014 "security": 0, 00:08:30.014 "format": 0, 00:08:30.014 "firmware": 0, 00:08:30.014 "ns_manage": 0 00:08:30.014 }, 00:08:30.014 "multi_ctrlr": true, 00:08:30.014 
"ana_reporting": false 00:08:30.014 }, 00:08:30.014 "vs": { 00:08:30.014 "nvme_version": "1.3" 00:08:30.014 }, 00:08:30.014 "ns_data": { 00:08:30.014 "id": 1, 00:08:30.014 "can_share": true 00:08:30.014 } 00:08:30.014 } 00:08:30.014 ], 00:08:30.014 "mp_policy": "active_passive" 00:08:30.014 } 00:08:30.014 } 00:08:30.014 ] 00:08:30.014 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152663 00:08:30.014 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.014 22:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.014 Running I/O for 10 seconds... 00:08:31.399 Latency(us) 00:08:31.399 [2024-12-16T21:14:21.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.399 Nvme0n1 : 1.00 22825.00 89.16 0.00 0.00 0.00 0.00 0.00 00:08:31.399 [2024-12-16T21:14:21.100Z] =================================================================================================================== 00:08:31.399 [2024-12-16T21:14:21.100Z] Total : 22825.00 89.16 0.00 0.00 0.00 0.00 0.00 00:08:31.399 00:08:31.968 22:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:32.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.227 Nvme0n1 : 2.00 23230.00 90.74 0.00 0.00 0.00 0.00 0.00 00:08:32.227 [2024-12-16T21:14:21.928Z] =================================================================================================================== 00:08:32.227 [2024-12-16T21:14:21.928Z] Total : 23230.00 90.74 0.00 0.00 0.00 0.00 0.00 00:08:32.227 00:08:32.227 true 00:08:32.227 22:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:32.227 22:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:32.486 22:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.486 22:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.486 22:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152663 00:08:33.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.054 Nvme0n1 : 3.00 23373.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:33.054 [2024-12-16T21:14:22.755Z] =================================================================================================================== 00:08:33.054 [2024-12-16T21:14:22.755Z] Total : 23373.00 91.30 0.00 0.00 0.00 0.00 0.00 00:08:33.054 00:08:34.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.434 Nvme0n1 : 4.00 23466.75 91.67 0.00 0.00 0.00 0.00 0.00 00:08:34.434 [2024-12-16T21:14:24.135Z] 
=================================================================================================================== 00:08:34.434 [2024-12-16T21:14:24.135Z] Total : 23466.75 91.67 0.00 0.00 0.00 0.00 0.00 00:08:34.434 00:08:35.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.002 Nvme0n1 : 5.00 23562.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:35.002 [2024-12-16T21:14:24.703Z] =================================================================================================================== 00:08:35.002 [2024-12-16T21:14:24.703Z] Total : 23562.00 92.04 0.00 0.00 0.00 0.00 0.00 00:08:35.002 00:08:36.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.382 Nvme0n1 : 6.00 23612.00 92.23 0.00 0.00 0.00 0.00 0.00 00:08:36.382 [2024-12-16T21:14:26.083Z] =================================================================================================================== 00:08:36.382 [2024-12-16T21:14:26.083Z] Total : 23612.00 92.23 0.00 0.00 0.00 0.00 0.00 00:08:36.382 00:08:37.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.320 Nvme0n1 : 7.00 23650.29 92.38 0.00 0.00 0.00 0.00 0.00 00:08:37.320 [2024-12-16T21:14:27.021Z] =================================================================================================================== 00:08:37.320 [2024-12-16T21:14:27.021Z] Total : 23650.29 92.38 0.00 0.00 0.00 0.00 0.00 00:08:37.320 00:08:38.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.259 Nvme0n1 : 8.00 23689.62 92.54 0.00 0.00 0.00 0.00 0.00 00:08:38.259 [2024-12-16T21:14:27.960Z] =================================================================================================================== 00:08:38.259 [2024-12-16T21:14:27.960Z] Total : 23689.62 92.54 0.00 0.00 0.00 0.00 0.00 00:08:38.259 00:08:39.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.197 Nvme0n1 : 9.00 23718.56 92.65 0.00 0.00 0.00 0.00 0.00 00:08:39.197 [2024-12-16T21:14:28.899Z] =================================================================================================================== 00:08:39.198 [2024-12-16T21:14:28.899Z] Total : 23718.56 92.65 0.00 0.00 0.00 0.00 0.00 00:08:39.198 00:08:40.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.136 Nvme0n1 : 10.00 23740.80 92.74 0.00 0.00 0.00 0.00 0.00 00:08:40.136 [2024-12-16T21:14:29.837Z] =================================================================================================================== 00:08:40.136 [2024-12-16T21:14:29.837Z] Total : 23740.80 92.74 0.00 0.00 0.00 0.00 0.00 00:08:40.136 00:08:40.136 00:08:40.136 Latency(us) 00:08:40.136 [2024-12-16T21:14:29.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.136 Nvme0n1 : 10.00 23746.68 92.76 0.00 0.00 5387.13 3167.57 15666.22 00:08:40.136 [2024-12-16T21:14:29.837Z] =================================================================================================================== 00:08:40.136 [2024-12-16T21:14:29.838Z] Total : 23746.68 92.76 0.00 0.00 5387.13 3167.57 15666.22 00:08:40.137 { 00:08:40.137 "results": [ 00:08:40.137 { 00:08:40.137 "job": "Nvme0n1", 00:08:40.137 "core_mask": "0x2", 00:08:40.137 "workload": "randwrite", 00:08:40.137 "status": "finished", 00:08:40.137 "queue_depth": 128, 00:08:40.137 "io_size": 4096, 00:08:40.137 
"runtime": 10.002916, 00:08:40.137 "iops": 23746.675469433114, 00:08:40.137 "mibps": 92.7604510524731, 00:08:40.137 "io_failed": 0, 00:08:40.137 "io_timeout": 0, 00:08:40.137 "avg_latency_us": 5387.129413726962, 00:08:40.137 "min_latency_us": 3167.5733333333333, 00:08:40.137 "max_latency_us": 15666.224761904761 00:08:40.137 } 00:08:40.137 ], 00:08:40.137 "core_count": 1 00:08:40.137 } 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152446 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152446 ']' 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152446 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152446 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152446' 00:08:40.137 killing process with pid 152446 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152446 00:08:40.137 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.137 00:08:40.137 Latency(us) 00:08:40.137 [2024-12-16T21:14:29.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.137 [2024-12-16T21:14:29.838Z] =================================================================================================================== 00:08:40.137 [2024-12-16T21:14:29.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.137 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152446 00:08:40.396 22:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.656 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.915 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:40.915 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.915 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.915 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:40.915 22:14:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.174 [2024-12-16 22:14:30.734902] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.174 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:41.433 request: 00:08:41.433 { 00:08:41.433 "uuid": "629f0dd1-a22a-4f5f-8502-6746ade82c24", 00:08:41.433 "method": "bdev_lvol_get_lvstores", 00:08:41.433 "req_id": 1 00:08:41.433 } 00:08:41.433 Got JSON-RPC error response 00:08:41.433 response: 00:08:41.433 { 00:08:41.433 "code": -19, 00:08:41.433 "message": "No such device" 00:08:41.433 } 00:08:41.433 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:41.433 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.433 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.433 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.433 22:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.693 aio_bdev 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 727b4eb7-f6bf-418c-bb1a-5f0de641e13d 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=727b4eb7-f6bf-418c-bb1a-5f0de641e13d 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.693 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 727b4eb7-f6bf-418c-bb1a-5f0de641e13d -t 2000 00:08:41.953 [ 00:08:41.953 { 00:08:41.953 "name": "727b4eb7-f6bf-418c-bb1a-5f0de641e13d", 00:08:41.953 "aliases": [ 00:08:41.953 "lvs/lvol" 00:08:41.953 ], 00:08:41.953 "product_name": "Logical Volume", 00:08:41.953 "block_size": 4096, 00:08:41.953 "num_blocks": 38912, 00:08:41.953 "uuid": "727b4eb7-f6bf-418c-bb1a-5f0de641e13d", 00:08:41.953 "assigned_rate_limits": { 00:08:41.953 "rw_ios_per_sec": 0, 00:08:41.953 "rw_mbytes_per_sec": 0, 00:08:41.953 "r_mbytes_per_sec": 0, 00:08:41.953 "w_mbytes_per_sec": 0 00:08:41.953 }, 00:08:41.953 "claimed": false, 00:08:41.953 "zoned": false, 00:08:41.953 "supported_io_types": { 00:08:41.953 "read": true, 00:08:41.953 "write": true, 00:08:41.953 "unmap": true, 00:08:41.953 "flush": false, 00:08:41.953 "reset": true, 00:08:41.953 "nvme_admin": false, 00:08:41.953 "nvme_io": false, 00:08:41.953 "nvme_io_md": false, 00:08:41.953 "write_zeroes": true, 00:08:41.953 "zcopy": false, 00:08:41.953 "get_zone_info": false, 00:08:41.953 "zone_management": false, 00:08:41.953 "zone_append": false, 00:08:41.953 "compare": false, 00:08:41.953 "compare_and_write": false, 00:08:41.953 "abort": false, 00:08:41.953 "seek_hole": true, 00:08:41.953 "seek_data": true, 00:08:41.953 "copy": false, 00:08:41.953 "nvme_iov_md": false 00:08:41.953 }, 00:08:41.953 "driver_specific": { 00:08:41.953 "lvol": { 00:08:41.953 "lvol_store_uuid": "629f0dd1-a22a-4f5f-8502-6746ade82c24", 00:08:41.953 "base_bdev": "aio_bdev", 00:08:41.953 "thin_provision": false, 00:08:41.953 "num_allocated_clusters": 38, 00:08:41.953 "snapshot": false, 00:08:41.953 "clone": false, 00:08:41.953 "esnap_clone": false 00:08:41.953 } 00:08:41.953 } 00:08:41.953 } 00:08:41.953 ] 00:08:41.953 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:41.953 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:41.953 
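This stretch is the persistence check: deleting the AIO bdev hot-removes the lvstore (the NOT wrapper expects bdev_lvol_get_lvstores to fail, and it does, with -19), yet re-creating a bdev over the same file brings the lvstore and its lvol back intact, because the lvol metadata lives in the backing file. Condensed, reusing the $rpc/$lvs/$lvol shorthand from the earlier sketch:

    $rpc bdev_aio_delete aio_bdev            # closes lvstore, lvol disappears
    $rpc bdev_lvol_get_lvstores -u "$lvs"    # fails: -19, No such device (expected)
    $rpc bdev_aio_create aio_file aio_bdev 4096
    $rpc bdev_wait_for_examine               # lvstore re-examined from disk
    $rpc bdev_get_bdevs -b "$lvol" -t 2000   # back, num_allocated_clusters still 38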
22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.212 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.212 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:42.212 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:42.472 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:42.472 22:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 727b4eb7-f6bf-418c-bb1a-5f0de641e13d 00:08:42.472 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 629f0dd1-a22a-4f5f-8502-6746ade82c24 00:08:42.731 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.990 00:08:42.990 real 0m15.647s 00:08:42.990 user 0m15.199s 00:08:42.990 sys 0m1.488s 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.990 ************************************ 00:08:42.990 END TEST lvs_grow_clean 00:08:42.990 ************************************ 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:42.990 ************************************ 00:08:42.990 START TEST lvs_grow_dirty 00:08:42.990 ************************************ 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:42.990 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.250 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:43.250 22:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:43.509 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=55514a81-02ac-48f2-bb95-d600d0213715 00:08:43.509 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:43.509 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:43.769 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:43.769 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:43.769 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55514a81-02ac-48f2-bb95-d600d0213715 lvol 150 00:08:44.029 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:44.029 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.029 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:44.029 [2024-12-16 22:14:33.636120] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:44.029 [2024-12-16 22:14:33.636169] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:44.029 true 00:08:44.029 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:44.029 22:14:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:44.289 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:44.289 22:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.548 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:44.548 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.807 [2024-12-16 22:14:34.358258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.807 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=155186 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 155186 /var/tmp/bdevperf.sock 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 155186 ']' 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.066 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.066 [2024-12-16 22:14:34.589734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
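For reference, the grow flow this dirty run drives condenses to the RPC sequence below. This is a minimal sketch, not a transcript: /tmp/aio_bdev_file and the <lvs_uuid>/<lvol_uuid> placeholders are illustrative, and rpc.py stands in for scripts/rpc.py against the running target; the RPC names, flags, and sizes are the ones visible in the log above.

    truncate -s 200M /tmp/aio_bdev_file                       # 200 MiB backing file
    rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # AIO bdev with 4096-byte blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150            # 150 MiB lvol on the store
    truncate -s 400M /tmp/aio_bdev_file                       # grow the backing file ...
    rpc.py bdev_aio_rescan aio_bdev                           # ... and rescan: block count 51200 -> 102400
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'   # still 49: rescan alone does not grow the store
    rpc.py bdev_lvol_grow_lvstore -u <lvs_uuid>               # issued later (sh@60) under bdevperf load: clusters 49 -> 99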
00:08:45.066 [2024-12-16 22:14:34.589779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155186 ] 00:08:45.066 [2024-12-16 22:14:34.664731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.066 [2024-12-16 22:14:34.687449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.325 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.325 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:45.325 22:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.592 Nvme0n1 00:08:45.592 22:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.592 [ 00:08:45.592 { 00:08:45.592 "name": "Nvme0n1", 00:08:45.592 "aliases": [ 00:08:45.592 "1cb8ec0b-3a31-443a-aa74-991658ac0c2a" 00:08:45.592 ], 00:08:45.592 "product_name": "NVMe disk", 00:08:45.592 "block_size": 4096, 00:08:45.592 "num_blocks": 38912, 00:08:45.592 "uuid": "1cb8ec0b-3a31-443a-aa74-991658ac0c2a", 00:08:45.592 "numa_id": 1, 00:08:45.592 "assigned_rate_limits": { 00:08:45.592 "rw_ios_per_sec": 0, 00:08:45.592 "rw_mbytes_per_sec": 0, 00:08:45.592 "r_mbytes_per_sec": 0, 00:08:45.592 "w_mbytes_per_sec": 0 00:08:45.592 }, 00:08:45.592 "claimed": false, 00:08:45.592 "zoned": false, 00:08:45.592 "supported_io_types": { 00:08:45.592 "read": true, 00:08:45.592 "write": true, 00:08:45.592 "unmap": true, 00:08:45.592 "flush": true, 00:08:45.592 "reset": true, 00:08:45.592 "nvme_admin": true, 00:08:45.592 "nvme_io": true, 00:08:45.592 "nvme_io_md": false, 00:08:45.592 "write_zeroes": true, 00:08:45.592 "zcopy": false, 00:08:45.592 "get_zone_info": false, 00:08:45.592 "zone_management": false, 00:08:45.592 "zone_append": false, 00:08:45.592 "compare": true, 00:08:45.592 "compare_and_write": true, 00:08:45.592 "abort": true, 00:08:45.592 "seek_hole": false, 00:08:45.592 "seek_data": false, 00:08:45.592 "copy": true, 00:08:45.592 "nvme_iov_md": false 00:08:45.592 }, 00:08:45.592 "memory_domains": [ 00:08:45.592 { 00:08:45.592 "dma_device_id": "system", 00:08:45.592 "dma_device_type": 1 00:08:45.592 } 00:08:45.592 ], 00:08:45.592 "driver_specific": { 00:08:45.592 "nvme": [ 00:08:45.592 { 00:08:45.592 "trid": { 00:08:45.592 "trtype": "TCP", 00:08:45.592 "adrfam": "IPv4", 00:08:45.592 "traddr": "10.0.0.2", 00:08:45.592 "trsvcid": "4420", 00:08:45.592 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.592 }, 00:08:45.592 "ctrlr_data": { 00:08:45.592 "cntlid": 1, 00:08:45.592 "vendor_id": "0x8086", 00:08:45.592 "model_number": "SPDK bdev Controller", 00:08:45.592 "serial_number": "SPDK0", 00:08:45.592 "firmware_revision": "25.01", 00:08:45.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.592 "oacs": { 00:08:45.592 "security": 0, 00:08:45.592 "format": 0, 00:08:45.592 "firmware": 0, 00:08:45.592 "ns_manage": 0 00:08:45.592 }, 00:08:45.592 "multi_ctrlr": true, 00:08:45.592 
"ana_reporting": false 00:08:45.592 }, 00:08:45.592 "vs": { 00:08:45.592 "nvme_version": "1.3" 00:08:45.592 }, 00:08:45.592 "ns_data": { 00:08:45.592 "id": 1, 00:08:45.592 "can_share": true 00:08:45.592 } 00:08:45.592 } 00:08:45.592 ], 00:08:45.592 "mp_policy": "active_passive" 00:08:45.592 } 00:08:45.592 } 00:08:45.592 ] 00:08:45.592 22:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155204 00:08:45.592 22:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:45.592 22:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.858 Running I/O for 10 seconds... 00:08:46.795 Latency(us) 00:08:46.795 [2024-12-16T21:14:36.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.795 Nvme0n1 : 1.00 23517.00 91.86 0.00 0.00 0.00 0.00 0.00 00:08:46.795 [2024-12-16T21:14:36.496Z] =================================================================================================================== 00:08:46.795 [2024-12-16T21:14:36.496Z] Total : 23517.00 91.86 0.00 0.00 0.00 0.00 0.00 00:08:46.795 00:08:47.733 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:47.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.734 Nvme0n1 : 2.00 23604.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:47.734 [2024-12-16T21:14:37.435Z] =================================================================================================================== 00:08:47.734 [2024-12-16T21:14:37.435Z] Total : 23604.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:47.734 00:08:47.993 true 00:08:47.993 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:47.993 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:47.993 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:47.993 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:47.993 22:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155204 00:08:48.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.932 Nvme0n1 : 3.00 23677.67 92.49 0.00 0.00 0.00 0.00 0.00 00:08:48.932 [2024-12-16T21:14:38.633Z] =================================================================================================================== 00:08:48.932 [2024-12-16T21:14:38.633Z] Total : 23677.67 92.49 0.00 0.00 0.00 0.00 0.00 00:08:48.932 00:08:49.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.870 Nvme0n1 : 4.00 23732.00 92.70 0.00 0.00 0.00 0.00 0.00 00:08:49.870 [2024-12-16T21:14:39.571Z] 
=================================================================================================================== 00:08:49.870 [2024-12-16T21:14:39.571Z] Total : 23732.00 92.70 0.00 0.00 0.00 0.00 0.00 00:08:49.870 00:08:50.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.808 Nvme0n1 : 5.00 23738.60 92.73 0.00 0.00 0.00 0.00 0.00 00:08:50.808 [2024-12-16T21:14:40.509Z] =================================================================================================================== 00:08:50.808 [2024-12-16T21:14:40.509Z] Total : 23738.60 92.73 0.00 0.00 0.00 0.00 0.00 00:08:50.808 00:08:51.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.747 Nvme0n1 : 6.00 23732.50 92.71 0.00 0.00 0.00 0.00 0.00 00:08:51.747 [2024-12-16T21:14:41.448Z] =================================================================================================================== 00:08:51.747 [2024-12-16T21:14:41.448Z] Total : 23732.50 92.71 0.00 0.00 0.00 0.00 0.00 00:08:51.747 00:08:52.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.686 Nvme0n1 : 7.00 23766.00 92.84 0.00 0.00 0.00 0.00 0.00 00:08:52.686 [2024-12-16T21:14:42.387Z] =================================================================================================================== 00:08:52.686 [2024-12-16T21:14:42.387Z] Total : 23766.00 92.84 0.00 0.00 0.00 0.00 0.00 00:08:52.686 00:08:54.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.066 Nvme0n1 : 8.00 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:54.066 [2024-12-16T21:14:43.767Z] =================================================================================================================== 00:08:54.066 [2024-12-16T21:14:43.767Z] Total : 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:54.066 00:08:55.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.003 Nvme0n1 : 9.00 23814.78 93.03 0.00 0.00 0.00 0.00 0.00 00:08:55.003 [2024-12-16T21:14:44.704Z] =================================================================================================================== 00:08:55.003 [2024-12-16T21:14:44.704Z] Total : 23814.78 93.03 0.00 0.00 0.00 0.00 0.00 00:08:55.003 00:08:55.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.945 Nvme0n1 : 10.00 23815.40 93.03 0.00 0.00 0.00 0.00 0.00 00:08:55.945 [2024-12-16T21:14:45.646Z] =================================================================================================================== 00:08:55.945 [2024-12-16T21:14:45.646Z] Total : 23815.40 93.03 0.00 0.00 0.00 0.00 0.00 00:08:55.945 00:08:55.945 00:08:55.945 Latency(us) 00:08:55.945 [2024-12-16T21:14:45.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.945 Nvme0n1 : 10.00 23817.31 93.04 0.00 0.00 5371.33 3011.54 9799.19 00:08:55.945 [2024-12-16T21:14:45.646Z] =================================================================================================================== 00:08:55.945 [2024-12-16T21:14:45.646Z] Total : 23817.31 93.04 0.00 0.00 5371.33 3011.54 9799.19 00:08:55.945 { 00:08:55.945 "results": [ 00:08:55.945 { 00:08:55.945 "job": "Nvme0n1", 00:08:55.945 "core_mask": "0x2", 00:08:55.945 "workload": "randwrite", 00:08:55.945 "status": "finished", 00:08:55.945 "queue_depth": 128, 00:08:55.945 "io_size": 4096, 00:08:55.945 
"runtime": 10.004573, 00:08:55.945 "iops": 23817.30834489388, 00:08:55.945 "mibps": 93.03636072224172, 00:08:55.945 "io_failed": 0, 00:08:55.945 "io_timeout": 0, 00:08:55.945 "avg_latency_us": 5371.330204811346, 00:08:55.945 "min_latency_us": 3011.535238095238, 00:08:55.945 "max_latency_us": 9799.192380952381 00:08:55.945 } 00:08:55.945 ], 00:08:55.945 "core_count": 1 00:08:55.945 } 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 155186 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 155186 ']' 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 155186 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 155186 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 155186' 00:08:55.945 killing process with pid 155186 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 155186 00:08:55.945 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.945 00:08:55.945 Latency(us) 00:08:55.945 [2024-12-16T21:14:45.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.945 [2024-12-16T21:14:45.646Z] =================================================================================================================== 00:08:55.945 [2024-12-16T21:14:45.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 155186 00:08:55.945 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.204 22:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.463 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:56.463 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:56.723 22:14:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 152029 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 152029 00:08:56.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 152029 Killed "${NVMF_APP[@]}" "$@" 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=157007 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 157007 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 157007 ']' 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.723 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.723 [2024-12-16 22:14:46.276995] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:56.723 [2024-12-16 22:14:46.277044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.723 [2024-12-16 22:14:46.354465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.723 [2024-12-16 22:14:46.375078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.723 [2024-12-16 22:14:46.375110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.723 [2024-12-16 22:14:46.375117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.723 [2024-12-16 22:14:46.375124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
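The dirty-recovery half of the test picks up below: the first target was killed with kill -9 while the grown lvstore was still open, so the replacement target has to replay blobstore metadata when the AIO bdev is re-created on the same file. A minimal sketch of that re-attach, reusing the illustrative names from the sketch above; the expected cluster counts are the ones the log checks:

    rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # re-open the dirty file; blobstore recovery runs here
    rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000              # wait up to 2000 ms for the recovered lvol to re-register
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].free_clusters'         # expect 61, as before the kill
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid> | jq -r '.[0].total_data_clusters'   # expect 99: the grow persisted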
00:08:56.723 [2024-12-16 22:14:46.375128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.723 [2024-12-16 22:14:46.375640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.982 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.242 [2024-12-16 22:14:46.688532] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:57.242 [2024-12-16 22:14:46.688610] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:57.242 [2024-12-16 22:14:46.688633] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.242 22:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cb8ec0b-3a31-443a-aa74-991658ac0c2a -t 2000 00:08:57.502 [ 00:08:57.502 { 00:08:57.502 "name": "1cb8ec0b-3a31-443a-aa74-991658ac0c2a", 00:08:57.502 "aliases": [ 00:08:57.502 "lvs/lvol" 00:08:57.502 ], 00:08:57.502 "product_name": "Logical Volume", 00:08:57.502 "block_size": 4096, 00:08:57.502 "num_blocks": 38912, 00:08:57.502 "uuid": "1cb8ec0b-3a31-443a-aa74-991658ac0c2a", 00:08:57.502 "assigned_rate_limits": { 00:08:57.502 "rw_ios_per_sec": 0, 00:08:57.502 "rw_mbytes_per_sec": 0, 
00:08:57.502 "r_mbytes_per_sec": 0, 00:08:57.502 "w_mbytes_per_sec": 0 00:08:57.502 }, 00:08:57.502 "claimed": false, 00:08:57.502 "zoned": false, 00:08:57.502 "supported_io_types": { 00:08:57.502 "read": true, 00:08:57.502 "write": true, 00:08:57.502 "unmap": true, 00:08:57.502 "flush": false, 00:08:57.502 "reset": true, 00:08:57.502 "nvme_admin": false, 00:08:57.502 "nvme_io": false, 00:08:57.502 "nvme_io_md": false, 00:08:57.502 "write_zeroes": true, 00:08:57.502 "zcopy": false, 00:08:57.502 "get_zone_info": false, 00:08:57.502 "zone_management": false, 00:08:57.502 "zone_append": false, 00:08:57.502 "compare": false, 00:08:57.502 "compare_and_write": false, 00:08:57.502 "abort": false, 00:08:57.502 "seek_hole": true, 00:08:57.502 "seek_data": true, 00:08:57.502 "copy": false, 00:08:57.502 "nvme_iov_md": false 00:08:57.502 }, 00:08:57.502 "driver_specific": { 00:08:57.502 "lvol": { 00:08:57.502 "lvol_store_uuid": "55514a81-02ac-48f2-bb95-d600d0213715", 00:08:57.502 "base_bdev": "aio_bdev", 00:08:57.502 "thin_provision": false, 00:08:57.502 "num_allocated_clusters": 38, 00:08:57.502 "snapshot": false, 00:08:57.502 "clone": false, 00:08:57.502 "esnap_clone": false 00:08:57.502 } 00:08:57.502 } 00:08:57.502 } 00:08:57.502 ] 00:08:57.502 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:57.502 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:57.502 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:57.761 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:57.761 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:57.761 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:58.020 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:58.020 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.020 [2024-12-16 22:14:47.641493] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:58.020 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:58.021 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:58.280 request: 00:08:58.280 { 00:08:58.280 "uuid": "55514a81-02ac-48f2-bb95-d600d0213715", 00:08:58.280 "method": "bdev_lvol_get_lvstores", 00:08:58.280 "req_id": 1 00:08:58.280 } 00:08:58.280 Got JSON-RPC error response 00:08:58.280 response: 00:08:58.280 { 00:08:58.280 "code": -19, 00:08:58.280 "message": "No such device" 00:08:58.280 } 00:08:58.280 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:58.280 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.280 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.280 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.280 22:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.540 aio_bdev 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.540 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.540 22:14:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:58.800 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cb8ec0b-3a31-443a-aa74-991658ac0c2a -t 2000 00:08:58.800 [ 00:08:58.800 { 00:08:58.800 "name": "1cb8ec0b-3a31-443a-aa74-991658ac0c2a", 00:08:58.800 "aliases": [ 00:08:58.800 "lvs/lvol" 00:08:58.800 ], 00:08:58.800 "product_name": "Logical Volume", 00:08:58.800 "block_size": 4096, 00:08:58.800 "num_blocks": 38912, 00:08:58.800 "uuid": "1cb8ec0b-3a31-443a-aa74-991658ac0c2a", 00:08:58.800 "assigned_rate_limits": { 00:08:58.800 "rw_ios_per_sec": 0, 00:08:58.800 "rw_mbytes_per_sec": 0, 00:08:58.800 "r_mbytes_per_sec": 0, 00:08:58.800 "w_mbytes_per_sec": 0 00:08:58.800 }, 00:08:58.800 "claimed": false, 00:08:58.800 "zoned": false, 00:08:58.800 "supported_io_types": { 00:08:58.800 "read": true, 00:08:58.800 "write": true, 00:08:58.800 "unmap": true, 00:08:58.800 "flush": false, 00:08:58.800 "reset": true, 00:08:58.800 "nvme_admin": false, 00:08:58.800 "nvme_io": false, 00:08:58.800 "nvme_io_md": false, 00:08:58.800 "write_zeroes": true, 00:08:58.800 "zcopy": false, 00:08:58.800 "get_zone_info": false, 00:08:58.800 "zone_management": false, 00:08:58.800 "zone_append": false, 00:08:58.800 "compare": false, 00:08:58.800 "compare_and_write": false, 00:08:58.800 "abort": false, 00:08:58.800 "seek_hole": true, 00:08:58.800 "seek_data": true, 00:08:58.800 "copy": false, 00:08:58.800 "nvme_iov_md": false 00:08:58.800 }, 00:08:58.800 "driver_specific": { 00:08:58.800 "lvol": { 00:08:58.800 "lvol_store_uuid": "55514a81-02ac-48f2-bb95-d600d0213715", 00:08:58.800 "base_bdev": "aio_bdev", 00:08:58.800 "thin_provision": false, 00:08:58.800 "num_allocated_clusters": 38, 00:08:58.800 "snapshot": false, 00:08:58.800 "clone": false, 00:08:58.800 "esnap_clone": false 00:08:58.800 } 00:08:58.800 } 00:08:58.800 } 00:08:58.800 ] 00:08:58.800 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:58.800 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:58.800 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:59.059 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:59.059 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:59.059 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:59.319 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:59.319 22:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1cb8ec0b-3a31-443a-aa74-991658ac0c2a 00:08:59.319 22:14:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55514a81-02ac-48f2-bb95-d600d0213715 00:08:59.578 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.837 00:08:59.837 real 0m16.799s 00:08:59.837 user 0m43.565s 00:08:59.837 sys 0m3.740s 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.837 ************************************ 00:08:59.837 END TEST lvs_grow_dirty 00:08:59.837 ************************************ 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:59.837 nvmf_trace.0 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:59.837 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.838 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.838 rmmod nvme_tcp 00:09:00.097 rmmod nvme_fabrics 00:09:00.097 rmmod nvme_keyring 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:00.097 
22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 157007 ']' 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 157007 ']' 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157007' 00:09:00.097 killing process with pid 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 157007 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.097 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.365 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.365 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:00.365 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.365 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.365 22:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:02.273 00:09:02.273 real 0m41.604s 00:09:02.273 user 1m4.337s 00:09:02.273 sys 0m10.113s 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:02.273 ************************************ 00:09:02.273 END TEST nvmf_lvs_grow 00:09:02.273 ************************************ 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.273 ************************************ 00:09:02.273 START TEST nvmf_bdev_io_wait 00:09:02.273 ************************************ 00:09:02.273 22:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:02.533 * Looking for test storage... 00:09:02.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:02.533 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.534 --rc genhtml_branch_coverage=1 00:09:02.534 --rc genhtml_function_coverage=1 00:09:02.534 --rc genhtml_legend=1 00:09:02.534 --rc geninfo_all_blocks=1 00:09:02.534 --rc geninfo_unexecuted_blocks=1 00:09:02.534 00:09:02.534 ' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.534 22:14:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.534 22:14:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:09.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:09.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.112 22:14:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.112 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:09.113 Found net devices under 0000:af:00.0: cvl_0_0 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:09.113 Found net devices under 0000:af:00.1: cvl_0_1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.113 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:09:09.113 00:09:09.113 --- 10.0.0.2 ping statistics --- 00:09:09.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.113 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
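To rebuild this test bed by hand, the nvmftestinit trace above reduces to the sketch below. Interface names, addresses, the namespace name and the firewall rule are taken from this run; on another host the cvl_* names will differ, since they are discovered from /sys/bus/pci/devices/<bdf>/net/* by the pci_net_devs loop shown earlier.

# Sketch of the namespace split performed above (values from this run).
ip netns add cvl_0_0_ns_spdk                        # target side lives here
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port in
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Note that the harness additionally tags its iptables rule with an SPDK_NVMF comment (visible in the ipts wrapper above), which is what lets teardown strip only its own rules later.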
00:09:09.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:09:09.113 00:09:09.113 --- 10.0.0.1 ping statistics --- 00:09:09.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.113 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161200 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 161200 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161200 ']' 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.113 [2024-12-16 22:14:58.156566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
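With nvmf_tgt up inside the namespace (note the NVMF_TARGET_NS_CMD prefix above: the app is launched via ip netns exec cvl_0_0_ns_spdk, on four cores per -m 0xF, held at --wait-for-rpc), the harness drives it over the RPC socket. rpc_cmd is, for practical purposes, scripts/rpc.py against /var/tmp/spdk.sock, so the calls traced below condense to the following. The deliberately tiny bdev_io pool (-p 5 -c 1, pool size 5 with cache size 1) is what starves bdev_io allocation and forces the io_wait retry path this test exists to exercise.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # what rpc_cmd wraps
$RPC bdev_set_options -p 5 -c 1       # tiny bdev_io pool: starve it on purpose
$RPC framework_start_init             # release the --wait-for-rpc hold
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420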
00:09:09.113 [2024-12-16 22:14:58.156609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.113 [2024-12-16 22:14:58.234354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.113 [2024-12-16 22:14:58.257754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.113 [2024-12-16 22:14:58.257791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.113 [2024-12-16 22:14:58.257798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.113 [2024-12-16 22:14:58.257803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.113 [2024-12-16 22:14:58.257809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.113 [2024-12-16 22:14:58.259107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.113 [2024-12-16 22:14:58.259235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.113 [2024-12-16 22:14:58.259280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.113 [2024-12-16 22:14:58.259281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.113 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:09.114 [2024-12-16 22:14:58.423387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 Malloc0 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.114 [2024-12-16 22:14:58.470342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161232 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161234 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.114 { 00:09:09.114 "params": { 
00:09:09.114 "name": "Nvme$subsystem", 00:09:09.114 "trtype": "$TEST_TRANSPORT", 00:09:09.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "$NVMF_PORT", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.114 "hdgst": ${hdgst:-false}, 00:09:09.114 "ddgst": ${ddgst:-false} 00:09:09.114 }, 00:09:09.114 "method": "bdev_nvme_attach_controller" 00:09:09.114 } 00:09:09.114 EOF 00:09:09.114 )") 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161236 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.114 { 00:09:09.114 "params": { 00:09:09.114 "name": "Nvme$subsystem", 00:09:09.114 "trtype": "$TEST_TRANSPORT", 00:09:09.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "$NVMF_PORT", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.114 "hdgst": ${hdgst:-false}, 00:09:09.114 "ddgst": ${ddgst:-false} 00:09:09.114 }, 00:09:09.114 "method": "bdev_nvme_attach_controller" 00:09:09.114 } 00:09:09.114 EOF 00:09:09.114 )") 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161239 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.114 { 00:09:09.114 "params": { 
00:09:09.114 "name": "Nvme$subsystem", 00:09:09.114 "trtype": "$TEST_TRANSPORT", 00:09:09.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "$NVMF_PORT", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.114 "hdgst": ${hdgst:-false}, 00:09:09.114 "ddgst": ${ddgst:-false} 00:09:09.114 }, 00:09:09.114 "method": "bdev_nvme_attach_controller" 00:09:09.114 } 00:09:09.114 EOF 00:09:09.114 )") 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.114 { 00:09:09.114 "params": { 00:09:09.114 "name": "Nvme$subsystem", 00:09:09.114 "trtype": "$TEST_TRANSPORT", 00:09:09.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "$NVMF_PORT", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.114 "hdgst": ${hdgst:-false}, 00:09:09.114 "ddgst": ${ddgst:-false} 00:09:09.114 }, 00:09:09.114 "method": "bdev_nvme_attach_controller" 00:09:09.114 } 00:09:09.114 EOF 00:09:09.114 )") 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161232 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.114 "params": { 00:09:09.114 "name": "Nvme1", 00:09:09.114 "trtype": "tcp", 00:09:09.114 "traddr": "10.0.0.2", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "4420", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.114 "hdgst": false, 00:09:09.114 "ddgst": false 00:09:09.114 }, 00:09:09.114 "method": "bdev_nvme_attach_controller" 00:09:09.114 }' 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.114 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.114 "params": { 00:09:09.114 "name": "Nvme1", 00:09:09.114 "trtype": "tcp", 00:09:09.114 "traddr": "10.0.0.2", 00:09:09.114 "adrfam": "ipv4", 00:09:09.114 "trsvcid": "4420", 00:09:09.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.115 "hdgst": false, 00:09:09.115 "ddgst": false 00:09:09.115 }, 00:09:09.115 "method": "bdev_nvme_attach_controller" 00:09:09.115 }' 00:09:09.115 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.115 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.115 "params": { 00:09:09.115 "name": "Nvme1", 00:09:09.115 "trtype": "tcp", 00:09:09.115 "traddr": "10.0.0.2", 00:09:09.115 "adrfam": "ipv4", 00:09:09.115 "trsvcid": "4420", 00:09:09.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.115 "hdgst": false, 00:09:09.115 "ddgst": false 00:09:09.115 }, 00:09:09.115 "method": "bdev_nvme_attach_controller" 00:09:09.115 }' 00:09:09.115 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.115 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.115 "params": { 00:09:09.115 "name": "Nvme1", 00:09:09.115 "trtype": "tcp", 00:09:09.115 "traddr": "10.0.0.2", 00:09:09.115 "adrfam": "ipv4", 00:09:09.115 "trsvcid": "4420", 00:09:09.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.115 "hdgst": false, 00:09:09.115 "ddgst": false 00:09:09.115 }, 00:09:09.115 "method": "bdev_nvme_attach_controller" 00:09:09.115 }' 00:09:09.115 [2024-12-16 22:14:58.522786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:09.115 [2024-12-16 22:14:58.522832] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:09.115 [2024-12-16 22:14:58.524027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:09.115 [2024-12-16 22:14:58.524072] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:09.115 [2024-12-16 22:14:58.524801] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:09.115 [2024-12-16 22:14:58.524842] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:09.115 [2024-12-16 22:14:58.526158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
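Condensed, the launch pattern traced above: four bdevperf instances run concurrently against the one namespace-hosted subsystem, one workload each, pinned to distinct cores (-m 0x10/0x20/0x40/0x80) with separate shared-memory ids (-i 1..4) and a 256 MB footprint (-s 256), and the script then parks on the saved PIDs. A hedged reconstruction using the flags exactly as logged, with gen_nvmf_target_json being the helper shown above:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
sync
wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"   # the wait 1612xx lines

In the result tables that follow, the flush job's outsized ~239K IOPS against ~7-10K for the data workloads is expected: a flush against a RAM-backed malloc bdev is effectively a no-op and completes immediately.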
00:09:09.115 [2024-12-16 22:14:58.526203] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:09.115 [2024-12-16 22:14:58.712701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.115 [2024-12-16 22:14:58.729977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.115 [2024-12-16 22:14:58.797212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.375 [2024-12-16 22:14:58.814825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:09.375 [2024-12-16 22:14:58.906261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.375 [2024-12-16 22:14:58.927720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:09.375 [2024-12-16 22:14:58.949952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.375 [2024-12-16 22:14:58.965639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:09.634 Running I/O for 1 seconds... 00:09:09.634 Running I/O for 1 seconds... 00:09:09.634 Running I/O for 1 seconds... 00:09:09.634 Running I/O for 1 seconds... 00:09:10.573 7557.00 IOPS, 29.52 MiB/s [2024-12-16T21:15:00.274Z] 10462.00 IOPS, 40.87 MiB/s 00:09:10.573 Latency(us) 00:09:10.573 [2024-12-16T21:15:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.573 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.573 Nvme1n1 : 1.01 10502.51 41.03 0.00 0.00 12134.27 7458.62 22843.98 00:09:10.573 [2024-12-16T21:15:00.274Z] =================================================================================================================== 00:09:10.573 [2024-12-16T21:15:00.274Z] Total : 10502.51 41.03 0.00 0.00 12134.27 7458.62 22843.98 00:09:10.573 00:09:10.573 Latency(us) 00:09:10.573 [2024-12-16T21:15:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.573 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.573 Nvme1n1 : 1.02 7570.92 29.57 0.00 0.00 16749.88 8113.98 29959.31 00:09:10.573 [2024-12-16T21:15:00.274Z] =================================================================================================================== 00:09:10.573 [2024-12-16T21:15:00.274Z] Total : 7570.92 29.57 0.00 0.00 16749.88 8113.98 29959.31 00:09:10.573 7607.00 IOPS, 29.71 MiB/s 00:09:10.573 Latency(us) 00:09:10.573 [2024-12-16T21:15:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.573 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.573 Nvme1n1 : 1.00 7716.04 30.14 0.00 0.00 16552.54 2980.33 36200.84 00:09:10.573 [2024-12-16T21:15:00.274Z] =================================================================================================================== 00:09:10.573 [2024-12-16T21:15:00.274Z] Total : 7716.04 30.14 0.00 0.00 16552.54 2980.33 36200.84 00:09:10.573 239776.00 IOPS, 936.62 MiB/s 00:09:10.573 Latency(us) 00:09:10.573 [2024-12-16T21:15:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.573 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.573 Nvme1n1 : 1.00 239406.46 935.18 0.00 0.00 531.92 235.03 1591.59 00:09:10.573 [2024-12-16T21:15:00.274Z] 
=================================================================================================================== 00:09:10.573 [2024-12-16T21:15:00.274Z] Total : 239406.46 935.18 0.00 0.00 531.92 235.03 1591.59 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161234 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161236 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161239 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.833 rmmod nvme_tcp 00:09:10.833 rmmod nvme_fabrics 00:09:10.833 rmmod nvme_keyring 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161200 ']' 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161200 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161200 ']' 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161200 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161200 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 161200' 00:09:10.833 killing process with pid 161200 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161200 00:09:10.833 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161200 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.093 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.999 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.999 00:09:12.999 real 0m10.742s 00:09:12.999 user 0m15.970s 00:09:12.999 sys 0m6.089s 00:09:12.999 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.999 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 ************************************ 00:09:12.999 END TEST nvmf_bdev_io_wait 00:09:12.999 ************************************ 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.258 ************************************ 00:09:13.258 START TEST nvmf_queue_depth 00:09:13.258 ************************************ 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.258 * Looking for test storage... 
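Before the queue_depth test spins up below, the bdev_io_wait teardown just traced is worth a note: modules are unloaded (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), the target is killed by its saved PID, and the firewall and namespace are restored. The iptables trick is the reusable part: because every rule the harness added carried an SPDK_NVMF comment, teardown can drop only its own rules without disturbing the rest of the host's ruleset.

iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only tagged rules
ip netns delete cvl_0_0_ns_spdk  # assumption: what the hidden _remove_spdk_ns does; the physical port falls back to the default namespace
ip -4 addr flush cvl_0_1         # as logged above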
00:09:13.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.258 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.259 --rc genhtml_branch_coverage=1 00:09:13.259 --rc genhtml_function_coverage=1 00:09:13.259 --rc genhtml_legend=1 00:09:13.259 --rc geninfo_all_blocks=1 00:09:13.259 --rc geninfo_unexecuted_blocks=1 00:09:13.259 00:09:13.259 ' 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.259 --rc genhtml_branch_coverage=1 00:09:13.259 --rc genhtml_function_coverage=1 00:09:13.259 --rc genhtml_legend=1 00:09:13.259 --rc geninfo_all_blocks=1 00:09:13.259 --rc geninfo_unexecuted_blocks=1 00:09:13.259 00:09:13.259 ' 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.259 --rc genhtml_branch_coverage=1 00:09:13.259 --rc genhtml_function_coverage=1 00:09:13.259 --rc genhtml_legend=1 00:09:13.259 --rc geninfo_all_blocks=1 00:09:13.259 --rc geninfo_unexecuted_blocks=1 00:09:13.259 00:09:13.259 ' 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.259 --rc genhtml_branch_coverage=1 00:09:13.259 --rc genhtml_function_coverage=1 00:09:13.259 --rc genhtml_legend=1 00:09:13.259 --rc geninfo_all_blocks=1 00:09:13.259 --rc geninfo_unexecuted_blocks=1 00:09:13.259 00:09:13.259 ' 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.259 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.519 22:15:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:20.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:20.096 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.096 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:20.097 Found net devices under 0000:af:00.0: cvl_0_0 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:20.097 Found net devices under 0000:af:00.1: cvl_0_1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:09:20.097 00:09:20.097 --- 10.0.0.2 ping statistics --- 00:09:20.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.097 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:20.097 00:09:20.097 --- 10.0.0.1 ping statistics --- 00:09:20.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.097 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=165690 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 165690 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165690 ']' 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.097 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 [2024-12-16 22:15:08.983346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
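The nvmftestinit block above wires both ports of the dual-port NIC (0000:af:00.0 and 0000:af:00.1, exposed as cvl_0_0 and cvl_0_1) into a loopback topology: the target port is moved into a private network namespace so NVMe/TCP traffic crosses a real link on a single host. Condensed into plain commands, with interface names and addresses exactly as they appear in the trace, the setup is roughly:

  ip netns add cvl_0_0_ns_spdk                                  # target port gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                            # reachability check, repeated from inside the namespace

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is why the EAL output that follows reports a single available core and a reactor started on core 1.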
00:09:20.097 [2024-12-16 22:15:08.983389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.097 [2024-12-16 22:15:09.062737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.097 [2024-12-16 22:15:09.084160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.097 [2024-12-16 22:15:09.084199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.097 [2024-12-16 22:15:09.084206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.097 [2024-12-16 22:15:09.084213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.097 [2024-12-16 22:15:09.084218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.097 [2024-12-16 22:15:09.084710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 [2024-12-16 22:15:09.214469] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 Malloc0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 [2024-12-16 22:15:09.264571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165710 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165710 /var/tmp/bdevperf.sock 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165710 ']' 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 [2024-12-16 22:15:09.312919] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
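The rpc_cmd calls above are the entire target-side configuration for the queue-depth test. Stripped of the harness wrappers, the sequence reduces to the following rpc.py calls (a condensed sketch; arguments copied from the trace, paths shortened):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                # TCP transport, 8192-byte I/O unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf, started above with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10, then plays the initiator role in the trace that follows:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests           # drives NVMe0n1 at queue depth 1024 for 10 s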
00:09:20.097 [2024-12-16 22:15:09.312962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165710 ] 00:09:20.097 [2024-12-16 22:15:09.386390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.097 [2024-12-16 22:15:09.408871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.097 NVMe0n1 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.097 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.357 Running I/O for 10 seconds... 00:09:22.233 11957.00 IOPS, 46.71 MiB/s [2024-12-16T21:15:12.872Z] 12240.50 IOPS, 47.81 MiB/s [2024-12-16T21:15:14.251Z] 12285.33 IOPS, 47.99 MiB/s [2024-12-16T21:15:15.188Z] 12309.25 IOPS, 48.08 MiB/s [2024-12-16T21:15:16.128Z] 12450.60 IOPS, 48.64 MiB/s [2024-12-16T21:15:17.064Z] 12445.50 IOPS, 48.62 MiB/s [2024-12-16T21:15:18.002Z] 12483.14 IOPS, 48.76 MiB/s [2024-12-16T21:15:18.939Z] 12512.75 IOPS, 48.88 MiB/s [2024-12-16T21:15:20.319Z] 12517.22 IOPS, 48.90 MiB/s [2024-12-16T21:15:20.319Z] 12562.30 IOPS, 49.07 MiB/s 00:09:30.618 Latency(us) 00:09:30.618 [2024-12-16T21:15:20.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.618 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:30.618 Verification LBA range: start 0x0 length 0x4000 00:09:30.618 NVMe0n1 : 10.11 12529.69 48.94 0.00 0.00 81153.46 19099.06 70903.71 00:09:30.618 [2024-12-16T21:15:20.319Z] =================================================================================================================== 00:09:30.618 [2024-12-16T21:15:20.319Z] Total : 12529.69 48.94 0.00 0.00 81153.46 19099.06 70903.71 00:09:30.618 { 00:09:30.618 "results": [ 00:09:30.618 { 00:09:30.618 "job": "NVMe0n1", 00:09:30.618 "core_mask": "0x1", 00:09:30.618 "workload": "verify", 00:09:30.618 "status": "finished", 00:09:30.618 "verify_range": { 00:09:30.618 "start": 0, 00:09:30.618 "length": 16384 00:09:30.618 }, 00:09:30.618 "queue_depth": 1024, 00:09:30.618 "io_size": 4096, 00:09:30.618 "runtime": 10.1052, 00:09:30.618 "iops": 12529.687685548035, 00:09:30.618 "mibps": 48.94409252167201, 00:09:30.618 "io_failed": 0, 00:09:30.618 "io_timeout": 0, 00:09:30.618 "avg_latency_us": 81153.46136287923, 00:09:30.618 "min_latency_us": 19099.062857142857, 00:09:30.618 "max_latency_us": 70903.71047619048 00:09:30.618 } 00:09:30.618 ], 00:09:30.618 "core_count": 1 00:09:30.618 } 00:09:30.618 22:15:20 
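A quick cross-check of the summary (illustrative arithmetic, not part of the test output): 12529.69 IOPS of 4096-byte I/Os is about 48.94 MiB/s, matching the MiB/s column, and by Little's law 1024 outstanding I/Os divided by 12529.69 IOPS is about 81.7 ms, consistent with the reported 81153.46 us average latency; the small gap reflects the ramp-up visible in the per-second IOPS samples.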
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 165710 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165710 ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165710 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165710 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165710' 00:09:30.618 killing process with pid 165710 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165710 00:09:30.618 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.618 00:09:30.618 Latency(us) 00:09:30.618 [2024-12-16T21:15:20.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.618 [2024-12-16T21:15:20.319Z] =================================================================================================================== 00:09:30.618 [2024-12-16T21:15:20.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165710 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.618 rmmod nvme_tcp 00:09:30.618 rmmod nvme_fabrics 00:09:30.618 rmmod nvme_keyring 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 165690 ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 165690 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165690 ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 165690 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.618 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165690 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165690' 00:09:30.879 killing process with pid 165690 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165690 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165690 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.879 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.418 00:09:33.418 real 0m19.822s 00:09:33.418 user 0m23.398s 00:09:33.418 sys 0m5.930s 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.418 ************************************ 00:09:33.418 END TEST nvmf_queue_depth 00:09:33.418 ************************************ 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.418 ************************************ 00:09:33.418 START TEST nvmf_target_multipath 00:09:33.418 ************************************ 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:33.418 * Looking for test storage... 00:09:33.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.418 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.419 --rc genhtml_branch_coverage=1 00:09:33.419 --rc genhtml_function_coverage=1 00:09:33.419 --rc genhtml_legend=1 00:09:33.419 --rc geninfo_all_blocks=1 00:09:33.419 --rc geninfo_unexecuted_blocks=1 00:09:33.419 00:09:33.419 ' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.419 --rc genhtml_branch_coverage=1 00:09:33.419 --rc genhtml_function_coverage=1 00:09:33.419 --rc genhtml_legend=1 00:09:33.419 --rc geninfo_all_blocks=1 00:09:33.419 --rc geninfo_unexecuted_blocks=1 00:09:33.419 00:09:33.419 ' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.419 --rc genhtml_branch_coverage=1 00:09:33.419 --rc genhtml_function_coverage=1 00:09:33.419 --rc genhtml_legend=1 00:09:33.419 --rc geninfo_all_blocks=1 00:09:33.419 --rc geninfo_unexecuted_blocks=1 00:09:33.419 00:09:33.419 ' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.419 --rc genhtml_branch_coverage=1 00:09:33.419 --rc genhtml_function_coverage=1 00:09:33.419 --rc genhtml_legend=1 00:09:33.419 --rc geninfo_all_blocks=1 00:09:33.419 --rc geninfo_unexecuted_blocks=1 00:09:33.419 00:09:33.419 ' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.419 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.420 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:39.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:39.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:39.997 Found net devices under 0000:af:00.0: cvl_0_0 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.997 22:15:28 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:39.997 Found net devices under 0000:af:00.1: cvl_0_1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:39.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:39.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms
00:09:39.997
00:09:39.997 --- 10.0.0.2 ping statistics ---
00:09:39.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:39.997 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms
00:09:39.997 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:39.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:39.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:09:39.997
00:09:39.997 --- 10.0.0.1 ping statistics ---
00:09:39.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:39.997 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
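Reproducing this bring-up outside the harness takes nothing more than the commands just traced. A minimal sketch, assuming the same back-to-back NIC ports (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing used on this test bed, and root privileges:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment tags the rule so teardown can find it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator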
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:39.998 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:41.378 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:41.378
00:09:41.378 real 0m8.371s
00:09:41.378 user 0m1.829s
00:09:41.378 sys 0m4.501s
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:41.378 ************************************
00:09:41.378 END TEST nvmf_target_multipath
00:09:41.378 ************************************
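The teardown traced above (run once by multipath.sh@47 and again by the EXIT trap) reduces to the sketch below. The module-unload retry mirrors the for i in {1..20} loop, and the iptables-save | grep -v SPDK_NVMF | iptables-restore idiom drops exactly the rules tagged during setup. The explicit ip netns delete is an assumption about what _remove_spdk_ns does, since the trace redirects that helper's output away:

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break     # also pulls out nvme_fabrics and nvme_keyring
done
modprobe -v -r nvme-fabrics
set -e
# Drop every firewall rule carrying the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null          # assumed _remove_spdk_ns behaviour
ip -4 addr flush cvl_0_1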
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.378 22:15:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:41.638 ************************************
00:09:41.638 START TEST nvmf_zcopy
00:09:41.638 ************************************
00:09:41.638 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:41.638 * Looking for test storage...
00:09:41.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:41.638 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:41.638 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:09:41.638 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:41.638 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:41.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.639 --rc genhtml_branch_coverage=1
00:09:41.639 --rc genhtml_function_coverage=1
00:09:41.639 --rc genhtml_legend=1
00:09:41.639 --rc geninfo_all_blocks=1
00:09:41.639 --rc geninfo_unexecuted_blocks=1
00:09:41.639
00:09:41.639 '
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:41.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.639 --rc genhtml_branch_coverage=1
00:09:41.639 --rc genhtml_function_coverage=1
00:09:41.639 --rc genhtml_legend=1
00:09:41.639 --rc geninfo_all_blocks=1
00:09:41.639 --rc geninfo_unexecuted_blocks=1
00:09:41.639
00:09:41.639 '
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:41.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.639 --rc genhtml_branch_coverage=1
00:09:41.639 --rc genhtml_function_coverage=1
00:09:41.639 --rc genhtml_legend=1
00:09:41.639 --rc geninfo_all_blocks=1
00:09:41.639 --rc geninfo_unexecuted_blocks=1
00:09:41.639
00:09:41.639 '
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:41.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.639 --rc genhtml_branch_coverage=1
00:09:41.639 --rc genhtml_function_coverage=1
00:09:41.639 --rc genhtml_legend=1
00:09:41.639 --rc geninfo_all_blocks=1
00:09:41.639 --rc geninfo_unexecuted_blocks=1
00:09:41.639
00:09:41.639 '
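The lt/cmp_versions exchange above is scripts/common.sh splitting '1.15' and '2' on the characters . - : and walking the numeric fields until one differs. A condensed sketch of that comparison, assuming purely numeric fields (which is what the decimal calls enforce in this run):

cmp_lt() {  # cmp_lt A B: succeed iff version A sorts before version B
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not less-than
}
cmp_lt 1.15 2 && echo 'lcov 1.15 predates 2'   # field 0: 1 < 2, so this prints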
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:41.639 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:41.640 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:41.640 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:41.640 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:09:41.640 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:48.224 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:09:48.225 Found net devices under 0000:af:00.0: cvl_0_0
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:48.225 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:48.225 Found net devices under 0000:af:00.1: cvl_0_1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
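The @410-@429 loop above is the generic PCI-to-netdev resolution: for every whitelisted PCI function the kernel publishes its bound interfaces under /sys/bus/pci/devices/<bdf>/net/, and only interfaces whose operstate is up are kept. In sketch form, with the two e810 addresses found on this box:

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                   # glob may match nothing
        dev=${path##*/}                              # e.g. cvl_0_0
        [[ $(< "/sys/class/net/$dev/operstate") == up ]] || continue
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done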
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:48.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:48.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms
00:09:48.225
00:09:48.225 --- 10.0.0.2 ping statistics ---
00:09:48.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:48.225 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:48.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:48.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:09:48.225
00:09:48.225 --- 10.0.0.1 ping statistics ---
00:09:48.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:48.225 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=174651
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 174651
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 174651 ']'
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:48.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.225 [2024-12-16 22:15:37.338957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:09:48.225 [2024-12-16 22:15:37.339008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:48.225 [2024-12-16 22:15:37.414285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.225 [2024-12-16 22:15:37.435093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:48.225 [2024-12-16 22:15:37.435125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:48.225 [2024-12-16 22:15:37.435136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:48.225 [2024-12-16 22:15:37.435141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:48.225 [2024-12-16 22:15:37.435146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:48.225 [2024-12-16 22:15:37.435631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:48.225 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 [2024-12-16 22:15:37.577901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 [2024-12-16 22:15:37.598089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 malloc0
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
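Each rpc_cmd above is a thin wrapper that forwards to scripts/rpc.py on the target's default /var/tmp/spdk.sock, so the whole zcopy target bring-up condenses to the sketch below (arguments copied from the trace; treating these as direct rpc.py calls is an assumption about the wrapper, not about the arguments):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0      # 32 MiB ramdisk bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1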
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:48.226 {
00:09:48.226 "params": {
00:09:48.226 "name": "Nvme$subsystem",
00:09:48.226 "trtype": "$TEST_TRANSPORT",
00:09:48.226 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:48.226 "adrfam": "ipv4",
00:09:48.226 "trsvcid": "$NVMF_PORT",
00:09:48.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:48.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:48.226 "hdgst": ${hdgst:-false},
00:09:48.226 "ddgst": ${ddgst:-false}
00:09:48.226 },
00:09:48.226 "method": "bdev_nvme_attach_controller"
00:09:48.226 }
00:09:48.226 EOF
00:09:48.226 )")
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:48.226 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:48.226 "params": {
00:09:48.226 "name": "Nvme1",
00:09:48.226 "trtype": "tcp",
00:09:48.226 "traddr": "10.0.0.2",
00:09:48.226 "adrfam": "ipv4",
00:09:48.226 "trsvcid": "4420",
00:09:48.226 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:48.226 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:48.226 "hdgst": false,
00:09:48.226 "ddgst": false
00:09:48.226 },
00:09:48.226 "method": "bdev_nvme_attach_controller"
00:09:48.226 }'
00:09:48.226 [2024-12-16 22:15:37.680699] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:09:48.226 [2024-12-16 22:15:37.680738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174675 ]
00:09:48.226 [2024-12-16 22:15:37.753689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:48.226 [2024-12-16 22:15:37.776148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.485 Running I/O for 10 seconds...
00:09:50.799 8766.00 IOPS, 68.48 MiB/s
[2024-12-16T21:15:41.437Z] 8798.00 IOPS, 68.73 MiB/s
[2024-12-16T21:15:42.374Z] 8798.00 IOPS, 68.73 MiB/s
[2024-12-16T21:15:43.311Z] 8833.00 IOPS, 69.01 MiB/s
[2024-12-16T21:15:44.249Z] 8852.00 IOPS, 69.16 MiB/s
[2024-12-16T21:15:45.186Z] 8865.83 IOPS, 69.26 MiB/s
[2024-12-16T21:15:46.122Z] 8875.14 IOPS, 69.34 MiB/s
[2024-12-16T21:15:47.500Z] 8880.75 IOPS, 69.38 MiB/s
[2024-12-16T21:15:48.438Z] 8893.11 IOPS, 69.48 MiB/s
[2024-12-16T21:15:48.438Z] 8896.70 IOPS, 69.51 MiB/s
00:09:58.737 Latency(us)
00:09:58.737 [2024-12-16T21:15:48.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:58.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:58.737 Verification LBA range: start 0x0 length 0x1000
00:09:58.737 Nvme1n1 : 10.01 8900.17 69.53 0.00 0.00 14340.55 1927.07 23343.30
00:09:58.737 [2024-12-16T21:15:48.438Z] ===================================================================================================================
00:09:58.737 [2024-12-16T21:15:48.438Z] Total : 8900.17 69.53 0.00 0.00 14340.55 1927.07 23343.30
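As a sanity check, the summary line is internally consistent: 8900.17 IOPS at the job's 8192-byte I/O size is 8900.17 x 8192 = 72.9 MB/s, i.e. 69.53 MiB/s, matching the MiB/s column, and at queue depth 128 Little's law predicts an average latency of 128 / 8900.17 = 14.4 ms, in line with the reported 14340.55 us average.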
"Nvme$subsystem", 00:09:58.737 "trtype": "$TEST_TRANSPORT", 00:09:58.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.737 "adrfam": "ipv4", 00:09:58.737 "trsvcid": "$NVMF_PORT", 00:09:58.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.737 "hdgst": ${hdgst:-false}, 00:09:58.737 "ddgst": ${ddgst:-false} 00:09:58.737 }, 00:09:58.737 "method": "bdev_nvme_attach_controller" 00:09:58.737 } 00:09:58.737 EOF 00:09:58.737 )") 00:09:58.737 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:58.737 [2024-12-16 22:15:48.284007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.284047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:58.737 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:58.737 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:58.737 "params": { 00:09:58.737 "name": "Nvme1", 00:09:58.737 "trtype": "tcp", 00:09:58.737 "traddr": "10.0.0.2", 00:09:58.737 "adrfam": "ipv4", 00:09:58.737 "trsvcid": "4420", 00:09:58.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.737 "hdgst": false, 00:09:58.737 "ddgst": false 00:09:58.737 }, 00:09:58.737 "method": "bdev_nvme_attach_controller" 00:09:58.737 }' 00:09:58.737 [2024-12-16 22:15:48.296016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.296029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.308040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.308051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.320072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.320082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.321786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:58.737 [2024-12-16 22:15:48.321831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176459 ] 00:09:58.737 [2024-12-16 22:15:48.332108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.332120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.344139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.344149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.356172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.356182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.368212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.368227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.380239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.380249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.392266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.392276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.395043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.737 [2024-12-16 22:15:48.404301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.404315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.416334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.416354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.737 [2024-12-16 22:15:48.417412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.737 [2024-12-16 22:15:48.428375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.737 [2024-12-16 22:15:48.428391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.997 [2024-12-16 22:15:48.440405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.997 [2024-12-16 22:15:48.440422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.997 [2024-12-16 22:15:48.452436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.997 [2024-12-16 22:15:48.452449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.997 [2024-12-16 22:15:48.464465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.997 [2024-12-16 22:15:48.464488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.997 [2024-12-16 22:15:48.476501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
00:09:58.997 Running I/O for 5 seconds...
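Annotation: the banner above comes from bdevperf, the application named in the EAL parameters. The exact command line for this run is not shown in the excerpt; an invocation consistent with the single-core mask (0x1) and the 5-second duration might look roughly like the sketch below (binary path, queue depth, I/O size, workload, and config path are all illustrative assumptions):

  # Sketch only -- flags other than -m and -t are assumed, not read from this log.
  ./build/examples/bdevperf -m 0x1 -q 128 -o 8192 -w verify -t 5 --json /path/to/bdev_config.json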
00:10:00.036 17059.00 IOPS, 133.27 MiB/s [2024-12-16T21:15:49.737Z]
00:10:01.076 17033.50 IOPS, 133.07 MiB/s [2024-12-16T21:15:50.777Z]
00:10:02.115 17077.00 IOPS, 133.41 MiB/s [2024-12-16T21:15:51.816Z]
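Annotation: a quick consistency check on the three per-second ticks above: the MiB/s column equals the IOPS column times an 8192-byte I/O size divided by 2^20, which pins this run's block size at 8 KiB (the size is inferred from the ratio, not stated in the excerpt):

  # Reproduces the log's own arithmetic; 8192 is derived from the IOPS-to-MiB/s ratio.
  awk 'BEGIN { printf "%.2f %.2f %.2f\n", 17059*8192/2^20, 17033.5*8192/2^20, 17077*8192/2^20 }'
  # -> 133.27 133.07 133.41  (matches the three ticks above)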
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.435674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.435694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.449416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.449434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.462968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.462989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.476862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.476880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.490820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.490838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.504718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.504741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.518354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.518372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.531761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.531780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.545291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.545311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.894 [2024-12-16 22:15:52.558980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.894 [2024-12-16 22:15:52.558998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.895 [2024-12-16 22:15:52.572563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.895 [2024-12-16 22:15:52.572582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.895 [2024-12-16 22:15:52.586351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.895 [2024-12-16 22:15:52.586369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.599997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.600019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.613936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.613956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.627221] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.627239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.640874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.640893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.654622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.654641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.668221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.668239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.681882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.681903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.695505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.695525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 17079.50 IOPS, 133.43 MiB/s [2024-12-16T21:15:52.855Z] [2024-12-16 22:15:52.709245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.709264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.723183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.723207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.736977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.736995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.750660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.750678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.764707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.764725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.775184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.775207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.789211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.789229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.803392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.803409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.818394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:03.154 [2024-12-16 22:15:52.818412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.832051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.832070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.154 [2024-12-16 22:15:52.845556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.154 [2024-12-16 22:15:52.845574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.859398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.859417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.873251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.873270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.886838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.886860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.900350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.900369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.914729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.914747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.930151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.930170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.943882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.943901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.957228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.957246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.970803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.970821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.984853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.984871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:52.998910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:52.998929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:53.012857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:53.012875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.414 [2024-12-16 22:15:53.026813] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.414 [2024-12-16 22:15:53.026831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.041197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.041215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.054841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.054859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.068184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.068207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.081562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.081580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.095000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.095019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.415 [2024-12-16 22:15:53.109144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.415 [2024-12-16 22:15:53.109162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.120047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.120065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.133940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.133959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.147035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.147054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.160778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.160796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.174182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.174205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.187942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.187960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.201560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.201579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.214814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.214832] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.228991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.229010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.242926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.674 [2024-12-16 22:15:53.242944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.674 [2024-12-16 22:15:53.256747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.256766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.270202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.270228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.283444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.283462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.296904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.296924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.310790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.310810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.324100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.324120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.337955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.337974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.351308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.351327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.675 [2024-12-16 22:15:53.364899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.675 [2024-12-16 22:15:53.364918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.378458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.378476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.392255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.392274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.406165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.406183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.416650] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.416670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.430330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.430349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.443954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.443973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.457752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.457771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.471166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.471185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.485045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.485065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.498589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.498607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.512735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.512754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.523376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.523400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.537403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.537422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.551211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.551230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.564565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.564584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.578738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.578757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.934 [2024-12-16 22:15:53.589157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.934 [2024-12-16 22:15:53.589175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.935 [2024-12-16 22:15:53.603361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.935 [2024-12-16 22:15:53.603380] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.935 [2024-12-16 22:15:53.616675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.935 [2024-12-16 22:15:53.616694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.935 [2024-12-16 22:15:53.630385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.935 [2024-12-16 22:15:53.630404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.194 [2024-12-16 22:15:53.644314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.644332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.657913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.657931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.671772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.671790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.685520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.685537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.699195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.699214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 17097.00 IOPS, 133.57 MiB/s [2024-12-16T21:15:53.896Z] [2024-12-16 22:15:53.712298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.712329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 00:10:04.195 Latency(us) 00:10:04.195 [2024-12-16T21:15:53.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.195 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:04.195 Nvme1n1 : 5.01 17098.67 133.58 0.00 0.00 7478.58 3542.06 18350.08 00:10:04.195 [2024-12-16T21:15:53.896Z] =================================================================================================================== 00:10:04.195 [2024-12-16T21:15:53.896Z] Total : 17098.67 133.58 0.00 0.00 7478.58 3542.06 18350.08 00:10:04.195 [2024-12-16 22:15:53.721432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.721451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.733461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.733483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.745502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 22:15:53.745520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.195 [2024-12-16 22:15:53.757528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.195 [2024-12-16 
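The repeats above are the test driving the failure path on purpose: while the bdevperf job runs, the script keeps trying to hot-add a namespace with NSID 1, which is already attached to nqn.2016-06.io.spdk:cnode1, so every RPC fails with the same pair of errors. The two messages can be provoked by hand with the stock scripts/rpc.py helper; a minimal sketch against a running target (bdev and subsystem names mirror the ones used in this run):

  # provoke "Requested NSID 1 already in use" deliberately (sketch)
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512                            # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: NSID 1 attached
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: NSID 1 already in use

The latency summary is also internally consistent: at queue depth 128, 17098.67 IOPS implies an average latency of 128 / 17098.67 s, roughly 7.49 ms, matching the reported 7478.58 us.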
[... after the summary the error pair keeps repeating at the same cadence, from 22:15:53.721432 through 22:15:53.865815 ...]
00:10:04.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176459) - No such process
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176459
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:04.195 delay0
00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.195 22:15:53
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.195 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.454 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.454 22:15:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:04.454 [2024-12-16 22:15:54.014925] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:11.072 [2024-12-16 22:16:00.629701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9500 is same with the state(6) to be set 00:10:11.072 [2024-12-16 22:16:00.629744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9500 is same with the state(6) to be set 00:10:11.072 Initializing NVMe Controllers 00:10:11.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.072 Initialization complete. Launching workers. 00:10:11.072 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3582 00:10:11.072 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3854, failed to submit 48 00:10:11.072 success 3655, unsuccessful 199, failed 0 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.072 rmmod nvme_tcp 00:10:11.072 rmmod nvme_fabrics 00:10:11.072 rmmod nvme_keyring 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 174651 ']' 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 174651 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 174651 ']' 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 174651 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 
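This is the abort phase of the teardown: malloc0 is re-exported behind a delay bdev with about 1 s of injected latency (reading the bdev_delay_create switches -r/-t/-w/-n as average and p99 read/write latencies in microseconds, which is my assumption here), so the abort example has long-lived in-flight commands to cancel. The reported counts add up: 3655 successful + 199 unsuccessful = 3854 aborts submitted, with 0 failed. The same sequence as standalone commands, as a sketch (target already running, paths relative to the SPDK tree):

  # sketch of the delay-bdev + abort sequence traced above
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'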
00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174651 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174651' 00:10:11.072 killing process with pid 174651 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 174651 00:10:11.072 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 174651 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.332 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.870 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.870 00:10:13.870 real 0m31.895s 00:10:13.870 user 0m44.107s 00:10:13.870 sys 0m9.990s 00:10:13.870 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.870 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.870 ************************************ 00:10:13.870 END TEST nvmf_zcopy 00:10:13.870 ************************************ 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.870 ************************************ 00:10:13.870 START TEST nvmf_nmic 00:10:13.870 ************************************ 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.870 * Looking for test storage... 00:10:13.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.870 --rc genhtml_branch_coverage=1 00:10:13.870 --rc genhtml_function_coverage=1 00:10:13.870 --rc genhtml_legend=1 00:10:13.870 --rc geninfo_all_blocks=1 00:10:13.870 --rc geninfo_unexecuted_blocks=1 00:10:13.870 00:10:13.870 ' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.870 --rc genhtml_branch_coverage=1 00:10:13.870 --rc genhtml_function_coverage=1 00:10:13.870 --rc genhtml_legend=1 00:10:13.870 --rc geninfo_all_blocks=1 00:10:13.870 --rc geninfo_unexecuted_blocks=1 00:10:13.870 00:10:13.870 ' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.870 --rc genhtml_branch_coverage=1 00:10:13.870 --rc genhtml_function_coverage=1 00:10:13.870 --rc genhtml_legend=1 00:10:13.870 --rc geninfo_all_blocks=1 00:10:13.870 --rc geninfo_unexecuted_blocks=1 00:10:13.870 00:10:13.870 ' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.870 --rc genhtml_branch_coverage=1 00:10:13.870 --rc genhtml_function_coverage=1 00:10:13.870 --rc genhtml_legend=1 00:10:13.870 --rc geninfo_all_blocks=1 00:10:13.870 --rc geninfo_unexecuted_blocks=1 00:10:13.870 00:10:13.870 ' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
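The xtrace above is scripts/common.sh comparing the installed lcov version against 2 ("lt 1.15 2"): the version strings are split on "." and "-" into arrays (ver1=(1 15), ver2=(2)) and compared component by component. A simplified sketch of that helper, reduced to the less-than case (the real cmp_versions drives <, >, = and friends through the same loop):

  # simplified stand-in for the lt()/cmp_versions pair traced above
  lt() {
      local IFS=.-
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first differing component decides
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1                                            # equal versions are not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"   # the branch this run took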
00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.870 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[the same directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the same directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:13.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
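The "[: : integer expression expected" line above is a real but harmless glitch in nvmf/common.sh: line 33 expands an unset variable into test's numeric comparison ('[' '' -eq 1 ']'), and -eq requires integers on both sides. The trace does not show which variable it is, so FLAG below is a made-up name; a minimal sketch of the failure and the usual guard:

  # reproduce and guard the common.sh:33 pattern (FLAG is illustrative)
  unset FLAG
  [ "$FLAG" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
  [ "${FLAG:-0}" -eq 1 ]   # guarded: empty defaults to 0, so the test is simply false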
22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.871 22:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:20.448 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:20.448 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.448 22:16:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:20.448 Found net devices under 0000:af:00.0: cvl_0_0 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:20.448 Found net devices under 0000:af:00.1: cvl_0_1 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.448 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:10:20.448 00:10:20.448 --- 10.0.0.2 ping statistics --- 00:10:20.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.448 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:10:20.448 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:10:20.448 00:10:20.448 --- 10.0.0.1 ping statistics --- 00:10:20.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.448 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181951 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181951 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181951 ']' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 [2024-12-16 22:16:09.305530] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:20.449 [2024-12-16 22:16:09.305570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.449 [2024-12-16 22:16:09.365134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.449 [2024-12-16 22:16:09.389141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.449 [2024-12-16 22:16:09.389177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.449 [2024-12-16 22:16:09.389184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.449 [2024-12-16 22:16:09.389194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.449 [2024-12-16 22:16:09.389200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.449 [2024-12-16 22:16:09.394223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.449 [2024-12-16 22:16:09.394260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.449 [2024-12-16 22:16:09.394371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.449 [2024-12-16 22:16:09.394372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 [2024-12-16 22:16:09.530648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 Malloc0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 [2024-12-16 22:16:09.605385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:20.449 test case1: single bdev can't be used in multiple subsystems 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 [2024-12-16 22:16:09.633280] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:20.449 [2024-12-16 22:16:09.633300] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:20.449 [2024-12-16 22:16:09.633308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.449 request: 00:10:20.449 { 00:10:20.449 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:20.449 "namespace": { 00:10:20.449 "bdev_name": "Malloc0", 00:10:20.449 "no_auto_visible": false, 
00:10:20.449 "hide_metadata": false 00:10:20.449 }, 00:10:20.449 "method": "nvmf_subsystem_add_ns", 00:10:20.449 "req_id": 1 00:10:20.449 } 00:10:20.449 Got JSON-RPC error response 00:10:20.449 response: 00:10:20.449 { 00:10:20.449 "code": -32602, 00:10:20.449 "message": "Invalid parameters" 00:10:20.449 } 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:20.449 Adding namespace failed - expected result. 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:20.449 test case2: host connect to nvmf target in multiple paths 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.449 [2024-12-16 22:16:09.645427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.449 22:16:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.387 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:22.765 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.765 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.765 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.765 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.765 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.670 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:24.670 22:16:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.670 [global] 00:10:24.670 thread=1 00:10:24.670 invalidate=1 00:10:24.670 rw=write 00:10:24.670 time_based=1 00:10:24.670 runtime=1 00:10:24.670 ioengine=libaio 00:10:24.670 direct=1 00:10:24.670 bs=4096 00:10:24.670 iodepth=1 00:10:24.670 norandommap=0 00:10:24.670 numjobs=1 00:10:24.670 00:10:24.670 verify_dump=1 00:10:24.670 verify_backlog=512 00:10:24.670 verify_state_save=0 00:10:24.670 do_verify=1 00:10:24.670 verify=crc32c-intel 00:10:24.670 [job0] 00:10:24.670 filename=/dev/nvme0n1 00:10:24.670 Could not set queue depth (nvme0n1) 00:10:24.928 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.928 fio-3.35 00:10:24.928 Starting 1 thread 00:10:26.301 00:10:26.301 job0: (groupid=0, jobs=1): err= 0: pid=183008: Mon Dec 16 22:16:15 2024 00:10:26.301 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:10:26.301 slat (nsec): min=9740, max=28331, avg=22150.64, stdev=3344.42 00:10:26.301 clat (usec): min=40764, max=42011, avg=41029.11, stdev=244.23 00:10:26.301 lat (usec): min=40786, max=42039, avg=41051.26, stdev=244.65 00:10:26.301 clat percentiles (usec): 00:10:26.301 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:26.301 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:26.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:26.301 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:26.301 | 99.99th=[42206] 00:10:26.301 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:26.301 slat (usec): min=10, max=28207, avg=66.79, stdev=1246.11 00:10:26.301 clat (usec): min=115, max=312, avg=134.70, stdev=18.44 00:10:26.301 lat (usec): min=127, max=28506, avg=201.49, stdev=1253.50 00:10:26.301 clat percentiles (usec): 00:10:26.301 | 1.00th=[ 120], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 00:10:26.301 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:10:26.301 | 70.00th=[ 135], 80.00th=[ 151], 90.00th=[ 163], 95.00th=[ 167], 00:10:26.301 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 314], 99.95th=[ 314], 00:10:26.301 | 99.99th=[ 314] 00:10:26.301 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.301 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.301 lat (usec) : 250=95.51%, 500=0.37% 00:10:26.301 lat (msec) : 50=4.12% 00:10:26.301 cpu : usr=0.79%, sys=0.50%, ctx=537, majf=0, minf=1 00:10:26.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.301 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.301 00:10:26.301 Run status group 0 (all jobs): 00:10:26.301 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), run=1008-1008msec 00:10:26.301 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:10:26.301 00:10:26.301 Disk stats (read/write): 00:10:26.301 nvme0n1: ios=45/512, merge=0/0, ticks=1767/67, in_queue=1834, util=98.50% 00:10:26.301 22:16:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.301 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.559 rmmod nvme_tcp 00:10:26.559 rmmod nvme_fabrics 00:10:26.559 rmmod nvme_keyring 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181951 ']' 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181951 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181951 ']' 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181951 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181951 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.559 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181951' 00:10:26.559 killing process with pid 181951 00:10:26.560 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181951 00:10:26.560 22:16:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 181951 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.819 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.727 00:10:28.727 real 0m15.307s 00:10:28.727 user 0m34.815s 00:10:28.727 sys 0m5.381s 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.727 ************************************ 00:10:28.727 END TEST nvmf_nmic 00:10:28.727 ************************************ 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.727 22:16:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.987 ************************************ 00:10:28.987 START TEST nvmf_fio_target 00:10:28.987 ************************************ 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.987 * Looking for test storage... 
00:10:28.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.987 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.988 --rc genhtml_branch_coverage=1 00:10:28.988 --rc genhtml_function_coverage=1 00:10:28.988 --rc genhtml_legend=1 00:10:28.988 --rc geninfo_all_blocks=1 00:10:28.988 --rc geninfo_unexecuted_blocks=1 00:10:28.988 00:10:28.988 ' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.988 --rc genhtml_branch_coverage=1 00:10:28.988 --rc genhtml_function_coverage=1 00:10:28.988 --rc genhtml_legend=1 00:10:28.988 --rc geninfo_all_blocks=1 00:10:28.988 --rc geninfo_unexecuted_blocks=1 00:10:28.988 00:10:28.988 ' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.988 --rc genhtml_branch_coverage=1 00:10:28.988 --rc genhtml_function_coverage=1 00:10:28.988 --rc genhtml_legend=1 00:10:28.988 --rc geninfo_all_blocks=1 00:10:28.988 --rc geninfo_unexecuted_blocks=1 00:10:28.988 00:10:28.988 ' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.988 --rc genhtml_branch_coverage=1 00:10:28.988 --rc genhtml_function_coverage=1 00:10:28.988 --rc genhtml_legend=1 00:10:28.988 --rc geninfo_all_blocks=1 00:10:28.988 --rc geninfo_unexecuted_blocks=1 00:10:28.988 00:10:28.988 ' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.988 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.989 22:16:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.989 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.567 22:16:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:35.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:35.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.567 22:16:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:35.567 Found net devices under 0000:af:00.0: cvl_0_0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:35.567 Found net devices under 0000:af:00.1: cvl_0_1 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.567 22:16:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:10:35.567 00:10:35.567 --- 10.0.0.2 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:10:35.567 00:10:35.567 --- 10.0.0.1 ping statistics --- 00:10:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.567 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186709 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186709 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186709 ']' 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.567 [2024-12-16 22:16:24.660075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
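The sequence above is nvmf_tcp_init in condensed form: cvl_0_0 becomes the target-side interface inside the cvl_0_0_ns_spdk namespace (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and reachability is verified in both directions before the target application starts. The commands below are taken from the trace (address flushes and the iptables comment tag dropped for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns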
00:10:35.567 [2024-12-16 22:16:24.660118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.567 [2024-12-16 22:16:24.734733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.567 [2024-12-16 22:16:24.757802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.567 [2024-12-16 22:16:24.757835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.567 [2024-12-16 22:16:24.757843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.567 [2024-12-16 22:16:24.757849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.567 [2024-12-16 22:16:24.757854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.567 [2024-12-16 22:16:24.759163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.567 [2024-12-16 22:16:24.759274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.567 [2024-12-16 22:16:24.759300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.567 [2024-12-16 22:16:24.759301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.567 22:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:35.567 [2024-12-16 22:16:25.060165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.567 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.827 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:35.827 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.084 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:36.084 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.084 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:36.084 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.342 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:36.342 22:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:36.600 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.857 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:36.858 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.115 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:37.115 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.373 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:37.373 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:37.373 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.630 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.630 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.887 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.887 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:38.144 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.144 [2024-12-16 22:16:27.770073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.144 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:38.401 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:38.658 22:16:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.027 22:16:29 
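Provisioning happens entirely over the RPC socket before the host connects: one TCP transport, seven 64 MiB malloc bdevs (512 B blocks), two of which are assembled into a raid0 bdev and three into a concat bdev, plus a single subsystem (cnode1) exporting four namespaces — Malloc0, Malloc1, raid0, concat0 — on 10.0.0.2:4420. A condensed, regrouped sketch of the flow traced above (rpc.py path shortened, host NQN flags on nvme connect omitted):

    # Transport and backing bdevs (bdev_malloc_create 64 512 is run once per Malloc0..Malloc6).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # One subsystem, four namespaces, one TCP listener.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0, concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect, then poll until all four namespaces surface as block devices.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME             # waitforserial expects 4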
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:40.027 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.027 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.027 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:40.027 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:40.027 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:41.922 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:41.922 [global] 00:10:41.922 thread=1 00:10:41.922 invalidate=1 00:10:41.922 rw=write 00:10:41.922 time_based=1 00:10:41.922 runtime=1 00:10:41.922 ioengine=libaio 00:10:41.922 direct=1 00:10:41.922 bs=4096 00:10:41.922 iodepth=1 00:10:41.922 norandommap=0 00:10:41.922 numjobs=1 00:10:41.922 00:10:41.922 verify_dump=1 00:10:41.922 verify_backlog=512 00:10:41.922 verify_state_save=0 00:10:41.922 do_verify=1 00:10:41.922 verify=crc32c-intel 00:10:41.922 [job0] 00:10:41.922 filename=/dev/nvme0n1 00:10:41.922 [job1] 00:10:41.922 filename=/dev/nvme0n2 00:10:41.922 [job2] 00:10:41.922 filename=/dev/nvme0n3 00:10:41.922 [job3] 00:10:41.922 filename=/dev/nvme0n4 00:10:41.922 Could not set queue depth (nvme0n1) 00:10:41.922 Could not set queue depth (nvme0n2) 00:10:41.922 Could not set queue depth (nvme0n3) 00:10:41.922 Could not set queue depth (nvme0n4) 00:10:42.179 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.179 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.179 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.179 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:42.179 fio-3.35 00:10:42.179 Starting 4 threads 00:10:43.551 00:10:43.551 job0: (groupid=0, jobs=1): err= 0: pid=188036: Mon Dec 16 22:16:33 2024 00:10:43.551 read: IOPS=22, BW=89.8KiB/s (92.0kB/s)(92.0KiB/1024msec) 00:10:43.551 slat (nsec): min=9281, max=22665, avg=21543.35, stdev=2719.97 00:10:43.551 clat (usec): min=220, max=41176, avg=39200.99, stdev=8497.61 00:10:43.551 lat (usec): min=241, max=41197, avg=39222.54, stdev=8497.90 00:10:43.551 clat percentiles (usec): 00:10:43.551 | 1.00th=[ 221], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:10:43.551 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:43.551 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:43.551 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:43.551 | 99.99th=[41157] 00:10:43.551 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:43.551 slat (nsec): min=10569, max=47806, avg=12338.17, stdev=2820.13 00:10:43.551 clat (usec): min=121, max=391, avg=221.96, stdev=49.99 00:10:43.551 lat (usec): min=133, max=402, avg=234.30, stdev=50.27 00:10:43.551 clat percentiles (usec): 00:10:43.551 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 167], 20.00th=[ 186], 00:10:43.551 | 30.00th=[ 194], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 229], 00:10:43.551 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 306], 95.00th=[ 334], 00:10:43.551 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 392], 00:10:43.551 | 99.99th=[ 392] 00:10:43.551 bw ( KiB/s): min= 4096, max= 4096, per=24.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:43.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:43.551 lat (usec) : 250=77.57%, 500=18.32% 00:10:43.551 lat (msec) : 50=4.11% 00:10:43.551 cpu : usr=0.59%, sys=0.68%, ctx=535, majf=0, minf=1 00:10:43.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.551 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.551 job1: (groupid=0, jobs=1): err= 0: pid=188037: Mon Dec 16 22:16:33 2024 00:10:43.551 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:10:43.551 slat (nsec): min=10861, max=24249, avg=22654.14, stdev=2653.45 00:10:43.551 clat (usec): min=40878, max=41952, avg=41050.69, stdev=291.36 00:10:43.551 lat (usec): min=40889, max=41975, avg=41073.34, stdev=291.68 00:10:43.551 clat percentiles (usec): 00:10:43.551 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:43.551 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:43.551 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:43.551 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:43.551 | 99.99th=[42206] 00:10:43.551 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:43.551 slat (nsec): min=9632, max=60793, avg=11343.77, stdev=4029.15 00:10:43.551 clat (usec): min=141, max=382, avg=175.78, stdev=23.92 00:10:43.551 lat (usec): min=152, max=443, avg=187.12, stdev=25.41 00:10:43.551 clat percentiles (usec): 00:10:43.551 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:43.551 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:43.551 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 225], 00:10:43.551 | 99.00th=[ 253], 99.50th=[ 314], 99.90th=[ 383], 99.95th=[ 383], 00:10:43.551 | 99.99th=[ 383] 00:10:43.551 bw ( KiB/s): min= 4096, max= 4096, per=24.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:43.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:43.551 lat (usec) : 250=94.57%, 500=1.31% 00:10:43.551 lat (msec) : 50=4.12% 00:10:43.551 cpu : usr=0.20%, sys=0.70%, ctx=536, majf=0, minf=1 00:10:43.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:43.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.552 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.552 job2: (groupid=0, jobs=1): err= 0: pid=188039: Mon Dec 16 22:16:33 2024 00:10:43.552 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:43.552 slat (nsec): min=7066, max=44592, avg=8074.18, stdev=1579.40 00:10:43.552 clat (usec): min=162, max=481, avg=194.82, stdev=17.08 00:10:43.552 lat (usec): min=170, max=490, avg=202.89, stdev=17.22 00:10:43.552 clat percentiles (usec): 00:10:43.552 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:10:43.552 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:10:43.552 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 219], 00:10:43.552 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 383], 99.95th=[ 482], 00:10:43.552 | 99.99th=[ 482] 00:10:43.552 write: IOPS=2808, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:10:43.552 slat (nsec): min=9944, max=43183, avg=11288.15, stdev=1941.58 00:10:43.552 clat (usec): min=118, max=864, avg=154.18, stdev=41.07 00:10:43.552 lat (usec): min=128, max=874, avg=165.47, stdev=41.38 00:10:43.552 clat percentiles (usec): 00:10:43.552 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:10:43.552 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:10:43.552 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 219], 95.00th=[ 237], 00:10:43.552 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 611], 99.95th=[ 717], 00:10:43.552 | 99.99th=[ 865] 00:10:43.552 bw ( KiB/s): min=12288, max=12288, per=72.51%, avg=12288.00, stdev= 0.00, samples=1 00:10:43.552 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:43.552 lat (usec) : 250=98.08%, 500=1.86%, 750=0.04%, 1000=0.02% 00:10:43.552 cpu : usr=5.40%, sys=7.30%, ctx=5371, majf=0, minf=1 00:10:43.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.552 issued rwts: total=2560,2811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.552 job3: (groupid=0, jobs=1): err= 0: pid=188040: Mon Dec 16 22:16:33 2024 00:10:43.552 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:10:43.552 slat (nsec): min=9724, max=23835, avg=20939.22, stdev=4666.54 00:10:43.552 clat (usec): min=284, max=42052, avg=39585.70, stdev=8580.86 00:10:43.552 lat (usec): min=306, max=42076, avg=39606.64, stdev=8580.47 00:10:43.552 clat percentiles (usec): 00:10:43.552 | 1.00th=[ 285], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:43.552 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:43.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:43.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:43.552 | 99.99th=[42206] 00:10:43.552 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:43.552 slat (nsec): min=9919, max=47524, avg=11740.27, stdev=2852.33 00:10:43.552 clat (usec): min=135, max=418, avg=209.88, stdev=33.02 00:10:43.552 lat (usec): min=146, max=462, avg=221.62, stdev=33.38 
00:10:43.552 clat percentiles (usec): 00:10:43.552 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 182], 00:10:43.552 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 221], 00:10:43.552 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 255], 00:10:43.552 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 420], 99.95th=[ 420], 00:10:43.552 | 99.99th=[ 420] 00:10:43.552 bw ( KiB/s): min= 4096, max= 4096, per=24.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:43.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:43.552 lat (usec) : 250=88.22%, 500=7.66% 00:10:43.552 lat (msec) : 50=4.11% 00:10:43.552 cpu : usr=0.29%, sys=0.59%, ctx=536, majf=0, minf=1 00:10:43.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.552 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.552 00:10:43.552 Run status group 0 (all jobs): 00:10:43.552 READ: bw=10.0MiB/s (10.5MB/s), 87.9KiB/s-9.99MiB/s (90.0kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1026msec 00:10:43.552 WRITE: bw=16.5MiB/s (17.4MB/s), 1996KiB/s-11.0MiB/s (2044kB/s-11.5MB/s), io=17.0MiB (17.8MB), run=1001-1026msec 00:10:43.552 00:10:43.552 Disk stats (read/write): 00:10:43.552 nvme0n1: ios=68/512, merge=0/0, ticks=721/108, in_queue=829, util=86.27% 00:10:43.552 nvme0n2: ios=68/512, merge=0/0, ticks=967/89, in_queue=1056, util=98.27% 00:10:43.552 nvme0n3: ios=2048/2533, merge=0/0, ticks=384/359, in_queue=743, util=88.89% 00:10:43.552 nvme0n4: ios=40/512, merge=0/0, ticks=1617/106, in_queue=1723, util=98.20% 00:10:43.552 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:43.552 [global] 00:10:43.552 thread=1 00:10:43.552 invalidate=1 00:10:43.552 rw=randwrite 00:10:43.552 time_based=1 00:10:43.552 runtime=1 00:10:43.552 ioengine=libaio 00:10:43.552 direct=1 00:10:43.552 bs=4096 00:10:43.552 iodepth=1 00:10:43.552 norandommap=0 00:10:43.552 numjobs=1 00:10:43.552 00:10:43.552 verify_dump=1 00:10:43.552 verify_backlog=512 00:10:43.552 verify_state_save=0 00:10:43.552 do_verify=1 00:10:43.552 verify=crc32c-intel 00:10:43.552 [job0] 00:10:43.552 filename=/dev/nvme0n1 00:10:43.552 [job1] 00:10:43.552 filename=/dev/nvme0n2 00:10:43.552 [job2] 00:10:43.552 filename=/dev/nvme0n3 00:10:43.552 [job3] 00:10:43.552 filename=/dev/nvme0n4 00:10:43.552 Could not set queue depth (nvme0n1) 00:10:43.552 Could not set queue depth (nvme0n2) 00:10:43.552 Could not set queue depth (nvme0n3) 00:10:43.552 Could not set queue depth (nvme0n4) 00:10:43.810 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.810 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.810 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.810 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.810 fio-3.35 00:10:43.810 Starting 4 threads 00:10:45.184 00:10:45.184 job0: (groupid=0, jobs=1): err= 0: pid=188410: Mon Dec 16 22:16:34 2024 00:10:45.184 read: IOPS=995, 
BW=3981KiB/s (4076kB/s)(4148KiB/1042msec) 00:10:45.184 slat (nsec): min=7541, max=29849, avg=9496.63, stdev=2001.23 00:10:45.184 clat (usec): min=168, max=41995, avg=733.82, stdev=4575.19 00:10:45.184 lat (usec): min=177, max=42017, avg=743.32, stdev=4576.57 00:10:45.184 clat percentiles (usec): 00:10:45.184 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:10:45.184 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:10:45.184 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 379], 00:10:45.184 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:45.184 | 99.99th=[42206] 00:10:45.184 write: IOPS=1474, BW=5896KiB/s (6038kB/s)(6144KiB/1042msec); 0 zone resets 00:10:45.184 slat (nsec): min=9097, max=59902, avg=11134.86, stdev=2243.50 00:10:45.184 clat (usec): min=110, max=846, avg=159.22, stdev=39.83 00:10:45.184 lat (usec): min=119, max=857, avg=170.35, stdev=40.42 00:10:45.184 clat percentiles (usec): 00:10:45.184 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 135], 00:10:45.184 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:10:45.184 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 208], 95.00th=[ 227], 00:10:45.184 | 99.00th=[ 249], 99.50th=[ 281], 99.90th=[ 652], 99.95th=[ 848], 00:10:45.184 | 99.99th=[ 848] 00:10:45.184 bw ( KiB/s): min= 184, max=12104, per=39.08%, avg=6144.00, stdev=8428.71, samples=2 00:10:45.184 iops : min= 46, max= 3026, avg=1536.00, stdev=2107.18, samples=2 00:10:45.184 lat (usec) : 250=96.89%, 500=2.49%, 750=0.08%, 1000=0.04% 00:10:45.184 lat (msec) : 50=0.51% 00:10:45.184 cpu : usr=0.86%, sys=3.17%, ctx=2575, majf=0, minf=1 00:10:45.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.184 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.184 job1: (groupid=0, jobs=1): err= 0: pid=188412: Mon Dec 16 22:16:34 2024 00:10:45.184 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:10:45.184 slat (nsec): min=10148, max=34660, avg=22195.64, stdev=6920.38 00:10:45.184 clat (usec): min=40829, max=41982, avg=41151.17, stdev=390.44 00:10:45.184 lat (usec): min=40850, max=42017, avg=41173.37, stdev=394.74 00:10:45.184 clat percentiles (usec): 00:10:45.184 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:45.184 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:45.184 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:45.184 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:45.184 | 99.99th=[42206] 00:10:45.184 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:45.185 slat (nsec): min=9100, max=44990, avg=12033.24, stdev=2984.58 00:10:45.185 clat (usec): min=121, max=329, avg=214.75, stdev=32.09 00:10:45.185 lat (usec): min=131, max=341, avg=226.79, stdev=32.34 00:10:45.185 clat percentiles (usec): 00:10:45.185 | 1.00th=[ 133], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 188], 00:10:45.185 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 225], 00:10:45.185 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 262], 00:10:45.185 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 330], 99.95th=[ 330], 00:10:45.185 | 99.99th=[ 330] 00:10:45.185 bw ( 
KiB/s): min= 4096, max= 4096, per=26.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.185 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.185 lat (usec) : 250=85.96%, 500=9.93% 00:10:45.185 lat (msec) : 50=4.12% 00:10:45.185 cpu : usr=0.59%, sys=0.29%, ctx=535, majf=0, minf=2 00:10:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.185 job2: (groupid=0, jobs=1): err= 0: pid=188432: Mon Dec 16 22:16:34 2024 00:10:45.185 read: IOPS=515, BW=2063KiB/s (2112kB/s)(2112KiB/1024msec) 00:10:45.185 slat (nsec): min=6789, max=25249, avg=8851.65, stdev=2757.84 00:10:45.185 clat (usec): min=179, max=41965, avg=1480.31, stdev=7019.85 00:10:45.185 lat (usec): min=186, max=41989, avg=1489.16, stdev=7021.87 00:10:45.185 clat percentiles (usec): 00:10:45.185 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:10:45.185 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 239], 00:10:45.185 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 420], 00:10:45.185 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:45.185 | 99.99th=[42206] 00:10:45.185 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:10:45.185 slat (nsec): min=5181, max=44553, avg=11608.26, stdev=2316.92 00:10:45.185 clat (usec): min=130, max=486, avg=213.98, stdev=33.91 00:10:45.185 lat (usec): min=141, max=496, avg=225.59, stdev=33.43 00:10:45.185 clat percentiles (usec): 00:10:45.185 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 188], 00:10:45.185 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 231], 00:10:45.185 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 260], 00:10:45.185 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 347], 99.95th=[ 486], 00:10:45.185 | 99.99th=[ 486] 00:10:45.185 bw ( KiB/s): min= 8192, max= 8192, per=52.10%, avg=8192.00, stdev= 0.00, samples=1 00:10:45.185 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:45.185 lat (usec) : 250=84.21%, 500=14.69%, 750=0.06% 00:10:45.185 lat (msec) : 50=1.03% 00:10:45.185 cpu : usr=0.68%, sys=2.15%, ctx=1555, majf=0, minf=1 00:10:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.185 job3: (groupid=0, jobs=1): err= 0: pid=188443: Mon Dec 16 22:16:34 2024 00:10:45.185 read: IOPS=531, BW=2125KiB/s (2176kB/s)(2204KiB/1037msec) 00:10:45.185 slat (nsec): min=6778, max=24158, avg=8591.11, stdev=2843.13 00:10:45.185 clat (usec): min=186, max=42049, avg=1502.65, stdev=6982.13 00:10:45.185 lat (usec): min=194, max=42057, avg=1511.24, stdev=6982.58 00:10:45.185 clat percentiles (usec): 00:10:45.185 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:10:45.185 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 247], 60.00th=[ 293], 00:10:45.185 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 355], 00:10:45.185 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:45.185 | 99.99th=[42206] 00:10:45.185 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:10:45.185 slat (nsec): min=4877, max=37491, avg=10393.09, stdev=2176.65 00:10:45.185 clat (usec): min=125, max=367, avg=183.05, stdev=40.29 00:10:45.185 lat (usec): min=135, max=398, avg=193.45, stdev=40.57 00:10:45.185 clat percentiles (usec): 00:10:45.185 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:45.185 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 188], 00:10:45.185 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 249], 00:10:45.185 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 330], 99.95th=[ 367], 00:10:45.185 | 99.99th=[ 367] 00:10:45.185 bw ( KiB/s): min= 8192, max= 8192, per=52.10%, avg=8192.00, stdev= 0.00, samples=1 00:10:45.185 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:45.185 lat (usec) : 250=79.81%, 500=19.11% 00:10:45.185 lat (msec) : 50=1.08% 00:10:45.185 cpu : usr=0.58%, sys=1.64%, ctx=1576, majf=0, minf=1 00:10:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.185 issued rwts: total=551,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.185 00:10:45.185 Run status group 0 (all jobs): 00:10:45.185 READ: bw=8207KiB/s (8404kB/s), 85.9KiB/s-3981KiB/s (88.0kB/s-4076kB/s), io=8552KiB (8757kB), run=1024-1042msec 00:10:45.185 WRITE: bw=15.4MiB/s (16.1MB/s), 2000KiB/s-5896KiB/s (2048kB/s-6038kB/s), io=16.0MiB (16.8MB), run=1024-1042msec 00:10:45.185 00:10:45.185 Disk stats (read/write): 00:10:45.185 nvme0n1: ios=1084/1536, merge=0/0, ticks=759/233, in_queue=992, util=97.49% 00:10:45.185 nvme0n2: ios=16/512, merge=0/0, ticks=657/105, in_queue=762, util=83.14% 00:10:45.185 nvme0n3: ios=561/1024, merge=0/0, ticks=1232/210, in_queue=1442, util=96.32% 00:10:45.185 nvme0n4: ios=594/1024, merge=0/0, ticks=715/185, in_queue=900, util=97.80% 00:10:45.185 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:45.185 [global] 00:10:45.185 thread=1 00:10:45.185 invalidate=1 00:10:45.185 rw=write 00:10:45.185 time_based=1 00:10:45.185 runtime=1 00:10:45.185 ioengine=libaio 00:10:45.185 direct=1 00:10:45.185 bs=4096 00:10:45.185 iodepth=128 00:10:45.185 norandommap=0 00:10:45.185 numjobs=1 00:10:45.185 00:10:45.185 verify_dump=1 00:10:45.185 verify_backlog=512 00:10:45.185 verify_state_save=0 00:10:45.185 do_verify=1 00:10:45.185 verify=crc32c-intel 00:10:45.185 [job0] 00:10:45.185 filename=/dev/nvme0n1 00:10:45.185 [job1] 00:10:45.185 filename=/dev/nvme0n2 00:10:45.185 [job2] 00:10:45.185 filename=/dev/nvme0n3 00:10:45.185 [job3] 00:10:45.185 filename=/dev/nvme0n4 00:10:45.185 Could not set queue depth (nvme0n1) 00:10:45.185 Could not set queue depth (nvme0n2) 00:10:45.185 Could not set queue depth (nvme0n3) 00:10:45.185 Could not set queue depth (nvme0n4) 00:10:45.444 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.444 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.444 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.444 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.444 fio-3.35 00:10:45.444 Starting 4 threads 00:10:46.832 00:10:46.832 job0: (groupid=0, jobs=1): err= 0: pid=188902: Mon Dec 16 22:16:36 2024 00:10:46.832 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:10:46.832 slat (nsec): min=1826, max=13002k, avg=170435.31, stdev=1027868.77 00:10:46.832 clat (usec): min=8523, max=41591, avg=19238.42, stdev=4756.22 00:10:46.832 lat (usec): min=8532, max=41599, avg=19408.85, stdev=4864.84 00:10:46.832 clat percentiles (usec): 00:10:46.832 | 1.00th=[10290], 5.00th=[13435], 10.00th=[14484], 20.00th=[15139], 00:10:46.832 | 30.00th=[16450], 40.00th=[17957], 50.00th=[19268], 60.00th=[20317], 00:10:46.832 | 70.00th=[20317], 80.00th=[21627], 90.00th=[24249], 95.00th=[27132], 00:10:46.832 | 99.00th=[35914], 99.50th=[37487], 99.90th=[41681], 99.95th=[41681], 00:10:46.832 | 99.99th=[41681] 00:10:46.832 write: IOPS=2456, BW=9828KiB/s (10.1MB/s)(9916KiB/1009msec); 0 zone resets 00:10:46.832 slat (usec): min=2, max=27825, avg=256.05, stdev=1146.87 00:10:46.832 clat (usec): min=8588, max=95257, avg=34007.48, stdev=20705.99 00:10:46.832 lat (usec): min=10146, max=95270, avg=34263.53, stdev=20824.79 00:10:46.832 clat percentiles (usec): 00:10:46.833 | 1.00th=[10552], 5.00th=[11338], 10.00th=[14353], 20.00th=[16909], 00:10:46.833 | 30.00th=[22152], 40.00th=[24249], 50.00th=[25560], 60.00th=[31065], 00:10:46.833 | 70.00th=[38011], 80.00th=[51119], 90.00th=[69731], 95.00th=[83362], 00:10:46.833 | 99.00th=[91751], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:10:46.833 | 99.99th=[94897] 00:10:46.833 bw ( KiB/s): min= 8776, max=10040, per=14.91%, avg=9408.00, stdev=893.78, samples=2 00:10:46.833 iops : min= 2194, max= 2510, avg=2352.00, stdev=223.45, samples=2 00:10:46.833 lat (msec) : 10=0.33%, 20=39.10%, 50=49.59%, 100=10.98% 00:10:46.833 cpu : usr=1.98%, sys=2.88%, ctx=306, majf=0, minf=1 00:10:46.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:46.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.833 issued rwts: total=2048,2479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.833 job1: (groupid=0, jobs=1): err= 0: pid=188918: Mon Dec 16 22:16:36 2024 00:10:46.833 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:10:46.833 slat (nsec): min=1360, max=10024k, avg=99350.65, stdev=674180.42 00:10:46.833 clat (usec): min=4369, max=44370, avg=11752.71, stdev=4353.42 00:10:46.833 lat (usec): min=4376, max=44379, avg=11852.06, stdev=4426.93 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[ 6390], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[ 9896], 00:10:46.833 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:10:46.833 | 70.00th=[10945], 80.00th=[11994], 90.00th=[16057], 95.00th=[20055], 00:10:46.833 | 99.00th=[31065], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:10:46.833 | 99.99th=[44303] 00:10:46.833 write: IOPS=4282, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1012msec); 0 zone resets 00:10:46.833 slat (usec): min=2, max=8485, avg=128.65, stdev=634.40 00:10:46.833 clat (usec): min=2785, max=58476, avg=18524.89, stdev=14468.75 00:10:46.833 lat (usec): min=2794, max=58488, 
avg=18653.54, stdev=14570.19 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 7832], 20.00th=[ 8717], 00:10:46.833 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[10683], 60.00th=[11994], 00:10:46.833 | 70.00th=[22676], 80.00th=[27657], 90.00th=[44827], 95.00th=[52691], 00:10:46.833 | 99.00th=[57410], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:10:46.833 | 99.99th=[58459] 00:10:46.833 bw ( KiB/s): min=13176, max=20480, per=26.67%, avg=16828.00, stdev=5164.71, samples=2 00:10:46.833 iops : min= 3294, max= 5120, avg=4207.00, stdev=1291.18, samples=2 00:10:46.833 lat (msec) : 4=0.39%, 10=30.96%, 20=49.67%, 50=15.41%, 100=3.57% 00:10:46.833 cpu : usr=3.56%, sys=4.95%, ctx=421, majf=0, minf=1 00:10:46.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.833 issued rwts: total=4096,4334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.833 job2: (groupid=0, jobs=1): err= 0: pid=188937: Mon Dec 16 22:16:36 2024 00:10:46.833 read: IOPS=6370, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1004msec) 00:10:46.833 slat (nsec): min=1291, max=10599k, avg=86904.32, stdev=625421.07 00:10:46.833 clat (usec): min=1164, max=21870, avg=10506.31, stdev=2531.23 00:10:46.833 lat (usec): min=3149, max=21880, avg=10593.22, stdev=2576.98 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[ 4047], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[ 9110], 00:10:46.833 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9896], 00:10:46.833 | 70.00th=[11338], 80.00th=[12125], 90.00th=[14353], 95.00th=[15795], 00:10:46.833 | 99.00th=[17957], 99.50th=[19268], 99.90th=[20841], 99.95th=[21627], 00:10:46.833 | 99.99th=[21890] 00:10:46.833 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:46.833 slat (usec): min=2, max=9279, avg=62.55, stdev=299.13 00:10:46.833 clat (usec): min=1520, max=21643, avg=9041.45, stdev=2150.84 00:10:46.833 lat (usec): min=1534, max=21646, avg=9104.00, stdev=2175.18 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[ 2802], 5.00th=[ 4424], 10.00th=[ 5997], 20.00th=[ 7963], 00:10:46.833 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9503], 00:10:46.833 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[11600], 95.00th=[11994], 00:10:46.833 | 99.00th=[15139], 99.50th=[15270], 99.90th=[18220], 99.95th=[19268], 00:10:46.833 | 99.99th=[21627] 00:10:46.833 bw ( KiB/s): min=24624, max=28624, per=42.19%, avg=26624.00, stdev=2828.43, samples=2 00:10:46.833 iops : min= 6156, max= 7156, avg=6656.00, stdev=707.11, samples=2 00:10:46.833 lat (msec) : 2=0.11%, 4=2.37%, 10=69.08%, 20=28.26%, 50=0.18% 00:10:46.833 cpu : usr=4.49%, sys=5.38%, ctx=806, majf=0, minf=2 00:10:46.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:46.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.833 issued rwts: total=6396,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.833 job3: (groupid=0, jobs=1): err= 0: pid=188943: Mon Dec 16 22:16:36 2024 00:10:46.833 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:10:46.833 slat (nsec): 
min=1450, max=12676k, avg=179263.98, stdev=1060759.57 00:10:46.833 clat (usec): min=12360, max=49511, avg=21439.43, stdev=4716.08 00:10:46.833 lat (usec): min=12367, max=49526, avg=21618.69, stdev=4817.11 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[15008], 5.00th=[15401], 10.00th=[16581], 20.00th=[17695], 00:10:46.833 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[21627], 00:10:46.833 | 70.00th=[23200], 80.00th=[23725], 90.00th=[25035], 95.00th=[28181], 00:10:46.833 | 99.00th=[38536], 99.50th=[38536], 99.90th=[42206], 99.95th=[42206], 00:10:46.833 | 99.99th=[49546] 00:10:46.833 write: IOPS=2472, BW=9891KiB/s (10.1MB/s)(9980KiB/1009msec); 0 zone resets 00:10:46.833 slat (usec): min=2, max=9863, avg=247.13, stdev=991.93 00:10:46.833 clat (usec): min=6568, max=95209, avg=33501.62, stdev=20724.90 00:10:46.833 lat (usec): min=6600, max=95220, avg=33748.75, stdev=20843.84 00:10:46.833 clat percentiles (usec): 00:10:46.833 | 1.00th=[11076], 5.00th=[11076], 10.00th=[11338], 20.00th=[17957], 00:10:46.833 | 30.00th=[22414], 40.00th=[24511], 50.00th=[25297], 60.00th=[28443], 00:10:46.833 | 70.00th=[37487], 80.00th=[49021], 90.00th=[65799], 95.00th=[83362], 00:10:46.833 | 99.00th=[91751], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:10:46.833 | 99.99th=[94897] 00:10:46.833 bw ( KiB/s): min= 8584, max=10360, per=15.01%, avg=9472.00, stdev=1255.82, samples=2 00:10:46.833 iops : min= 2146, max= 2590, avg=2368.00, stdev=313.96, samples=2 00:10:46.833 lat (msec) : 10=0.11%, 20=29.17%, 50=60.14%, 100=10.59% 00:10:46.833 cpu : usr=2.18%, sys=2.98%, ctx=305, majf=0, minf=1 00:10:46.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:46.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.833 issued rwts: total=2048,2495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.833 00:10:46.833 Run status group 0 (all jobs): 00:10:46.833 READ: bw=56.3MiB/s (59.0MB/s), 8119KiB/s-24.9MiB/s (8314kB/s-26.1MB/s), io=57.0MiB (59.8MB), run=1004-1012msec 00:10:46.833 WRITE: bw=61.6MiB/s (64.6MB/s), 9828KiB/s-25.9MiB/s (10.1MB/s-27.2MB/s), io=62.4MiB (65.4MB), run=1004-1012msec 00:10:46.833 00:10:46.833 Disk stats (read/write): 00:10:46.833 nvme0n1: ios=2092/2095, merge=0/0, ticks=20156/30625, in_queue=50781, util=98.30% 00:10:46.833 nvme0n2: ios=3120/3567, merge=0/0, ticks=34434/69356, in_queue=103790, util=95.33% 00:10:46.833 nvme0n3: ios=5364/5632, merge=0/0, ticks=54584/50324, in_queue=104908, util=88.96% 00:10:46.833 nvme0n4: ios=2098/2111, merge=0/0, ticks=22228/30522, in_queue=52750, util=98.53% 00:10:46.833 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:46.833 [global] 00:10:46.833 thread=1 00:10:46.833 invalidate=1 00:10:46.833 rw=randwrite 00:10:46.833 time_based=1 00:10:46.833 runtime=1 00:10:46.833 ioengine=libaio 00:10:46.833 direct=1 00:10:46.833 bs=4096 00:10:46.833 iodepth=128 00:10:46.833 norandommap=0 00:10:46.833 numjobs=1 00:10:46.833 00:10:46.833 verify_dump=1 00:10:46.833 verify_backlog=512 00:10:46.833 verify_state_save=0 00:10:46.833 do_verify=1 00:10:46.833 verify=crc32c-intel 00:10:46.833 [job0] 00:10:46.833 filename=/dev/nvme0n1 00:10:46.833 [job1] 00:10:46.833 filename=/dev/nvme0n2 00:10:46.833 [job2] 
00:10:46.833 filename=/dev/nvme0n3 00:10:46.833 [job3] 00:10:46.833 filename=/dev/nvme0n4 00:10:46.833 Could not set queue depth (nvme0n1) 00:10:46.833 Could not set queue depth (nvme0n2) 00:10:46.833 Could not set queue depth (nvme0n3) 00:10:46.833 Could not set queue depth (nvme0n4) 00:10:47.091 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.091 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.091 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.091 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.091 fio-3.35 00:10:47.091 Starting 4 threads 00:10:48.465 00:10:48.466 job0: (groupid=0, jobs=1): err= 0: pid=189347: Mon Dec 16 22:16:37 2024 00:10:48.466 read: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(10.4MiB/1013msec) 00:10:48.466 slat (nsec): min=1468, max=13516k, avg=156865.16, stdev=1018118.56 00:10:48.466 clat (usec): min=6613, max=52407, avg=17725.24, stdev=7914.62 00:10:48.466 lat (usec): min=6620, max=52416, avg=17882.10, stdev=8010.82 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 7635], 5.00th=[10159], 10.00th=[10814], 20.00th=[12780], 00:10:48.466 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15664], 60.00th=[17171], 00:10:48.466 | 70.00th=[17695], 80.00th=[19792], 90.00th=[28443], 95.00th=[36439], 00:10:48.466 | 99.00th=[47973], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:10:48.466 | 99.99th=[52167] 00:10:48.466 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:10:48.466 slat (usec): min=2, max=17232, avg=181.37, stdev=907.30 00:10:48.466 clat (usec): min=3676, max=76102, avg=26340.37, stdev=13335.47 00:10:48.466 lat (usec): min=3686, max=76113, avg=26521.74, stdev=13409.04 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 6390], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[15664], 00:10:48.466 | 30.00th=[19006], 40.00th=[22938], 50.00th=[25035], 60.00th=[25560], 00:10:48.466 | 70.00th=[28443], 80.00th=[39060], 90.00th=[43779], 95.00th=[48497], 00:10:48.466 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:10:48.466 | 99.99th=[76022] 00:10:48.466 bw ( KiB/s): min=11376, max=12928, per=20.04%, avg=12152.00, stdev=1097.43, samples=2 00:10:48.466 iops : min= 2844, max= 3232, avg=3038.00, stdev=274.36, samples=2 00:10:48.466 lat (msec) : 4=0.10%, 10=6.92%, 20=50.31%, 50=39.94%, 100=2.72% 00:10:48.466 cpu : usr=2.27%, sys=3.66%, ctx=303, majf=0, minf=1 00:10:48.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:48.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.466 issued rwts: total=2654,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.466 job1: (groupid=0, jobs=1): err= 0: pid=189350: Mon Dec 16 22:16:37 2024 00:10:48.466 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1013msec) 00:10:48.466 slat (nsec): min=1313, max=27338k, avg=187019.93, stdev=1349011.94 00:10:48.466 clat (usec): min=4931, max=87560, avg=21628.48, stdev=14051.44 00:10:48.466 lat (usec): min=6162, max=87564, avg=21815.50, stdev=14193.72 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 7701], 5.00th=[ 9896], 10.00th=[10159], 
20.00th=[10683], 00:10:48.466 | 30.00th=[11207], 40.00th=[16581], 50.00th=[17433], 60.00th=[19530], 00:10:48.466 | 70.00th=[22414], 80.00th=[28705], 90.00th=[42206], 95.00th=[47449], 00:10:48.466 | 99.00th=[80217], 99.50th=[81265], 99.90th=[87557], 99.95th=[87557], 00:10:48.466 | 99.99th=[87557] 00:10:48.466 write: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec); 0 zone resets 00:10:48.466 slat (usec): min=2, max=26799, avg=149.05, stdev=925.46 00:10:48.466 clat (usec): min=4703, max=87558, avg=22654.81, stdev=13542.10 00:10:48.466 lat (usec): min=4718, max=87563, avg=22803.86, stdev=13606.90 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 6652], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[10028], 00:10:48.466 | 30.00th=[15401], 40.00th=[15926], 50.00th=[23725], 60.00th=[25035], 00:10:48.466 | 70.00th=[25560], 80.00th=[27919], 90.00th=[37487], 95.00th=[50070], 00:10:48.466 | 99.00th=[72877], 99.50th=[74974], 99.90th=[76022], 99.95th=[87557], 00:10:48.466 | 99.99th=[87557] 00:10:48.466 bw ( KiB/s): min= 9856, max=14704, per=20.25%, avg=12280.00, stdev=3428.05, samples=2 00:10:48.466 iops : min= 2464, max= 3676, avg=3070.00, stdev=857.01, samples=2 00:10:48.466 lat (msec) : 10=13.16%, 20=40.59%, 50=41.40%, 100=4.85% 00:10:48.466 cpu : usr=1.88%, sys=4.05%, ctx=264, majf=0, minf=1 00:10:48.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:48.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.466 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.466 job2: (groupid=0, jobs=1): err= 0: pid=189351: Mon Dec 16 22:16:37 2024 00:10:48.466 read: IOPS=6475, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1007msec) 00:10:48.466 slat (nsec): min=1444, max=10557k, avg=83938.89, stdev=615206.94 00:10:48.466 clat (usec): min=3214, max=30464, avg=10531.47, stdev=3128.12 00:10:48.466 lat (usec): min=3224, max=30472, avg=10615.41, stdev=3175.65 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 4293], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 8848], 00:10:48.466 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10028], 00:10:48.466 | 70.00th=[10945], 80.00th=[12518], 90.00th=[13829], 95.00th=[15795], 00:10:48.466 | 99.00th=[26346], 99.50th=[28443], 99.90th=[30540], 99.95th=[30540], 00:10:48.466 | 99.99th=[30540] 00:10:48.466 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:10:48.466 slat (usec): min=2, max=8846, avg=62.79, stdev=378.45 00:10:48.466 clat (usec): min=1425, max=30980, avg=8876.40, stdev=2407.84 00:10:48.466 lat (usec): min=1441, max=30986, avg=8939.19, stdev=2436.96 00:10:48.466 clat percentiles (usec): 00:10:48.466 | 1.00th=[ 2769], 5.00th=[ 4424], 10.00th=[ 6128], 20.00th=[ 7898], 00:10:48.466 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:48.466 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[12518], 00:10:48.466 | 99.00th=[17171], 99.50th=[17957], 99.90th=[31065], 99.95th=[31065], 00:10:48.466 | 99.99th=[31065] 00:10:48.466 bw ( KiB/s): min=24816, max=28432, per=43.90%, avg=26624.00, stdev=2556.90, samples=2 00:10:48.466 iops : min= 6204, max= 7108, avg=6656.00, stdev=639.22, samples=2 00:10:48.466 lat (msec) : 2=0.30%, 4=2.12%, 10=73.52%, 20=23.18%, 50=0.87% 00:10:48.466 cpu : usr=4.67%, sys=6.96%, ctx=692, majf=0, minf=1 00:10:48.466 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:48.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.466 issued rwts: total=6521,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.466 job3: (groupid=0, jobs=1): err= 0: pid=189352: Mon Dec 16 22:16:37 2024 00:10:48.466 read: IOPS=2154, BW=8618KiB/s (8825kB/s)(8696KiB/1009msec) 00:10:48.466 slat (nsec): min=1730, max=36343k, avg=170621.73, stdev=1338067.75 00:10:48.466 clat (msec): min=3, max=120, avg=24.40, stdev=18.99 00:10:48.466 lat (msec): min=8, max=138, avg=24.57, stdev=19.12 00:10:48.466 clat percentiles (msec): 00:10:48.466 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:10:48.466 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 18], 60.00th=[ 22], 00:10:48.466 | 70.00th=[ 27], 80.00th=[ 29], 90.00th=[ 45], 95.00th=[ 70], 00:10:48.466 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 121], 00:10:48.466 | 99.99th=[ 122] 00:10:48.466 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:10:48.466 slat (usec): min=2, max=20176, avg=239.77, stdev=1189.32 00:10:48.466 clat (msec): min=7, max=157, avg=28.95, stdev=27.75 00:10:48.466 lat (msec): min=7, max=166, avg=29.19, stdev=27.95 00:10:48.466 clat percentiles (msec): 00:10:48.466 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:10:48.466 | 30.00th=[ 12], 40.00th=[ 17], 50.00th=[ 26], 60.00th=[ 26], 00:10:48.466 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 58], 95.00th=[ 103], 00:10:48.466 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:10:48.466 | 99.99th=[ 159] 00:10:48.466 bw ( KiB/s): min= 8824, max=11640, per=16.87%, avg=10232.00, stdev=1991.21, samples=2 00:10:48.466 iops : min= 2206, max= 2910, avg=2558.00, stdev=497.80, samples=2 00:10:48.466 lat (msec) : 4=0.02%, 10=0.95%, 20=47.66%, 50=39.97%, 100=8.05% 00:10:48.466 lat (msec) : 250=3.36% 00:10:48.466 cpu : usr=1.88%, sys=3.67%, ctx=270, majf=0, minf=1 00:10:48.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:48.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.466 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.466 00:10:48.466 Run status group 0 (all jobs): 00:10:48.466 READ: bw=54.1MiB/s (56.7MB/s), 8618KiB/s-25.3MiB/s (8825kB/s-26.5MB/s), io=54.8MiB (57.5MB), run=1007-1013msec 00:10:48.466 WRITE: bw=59.2MiB/s (62.1MB/s), 9.91MiB/s-25.8MiB/s (10.4MB/s-27.1MB/s), io=60.0MiB (62.9MB), run=1007-1013msec 00:10:48.466 00:10:48.466 Disk stats (read/write): 00:10:48.466 nvme0n1: ios=2100/2560, merge=0/0, ticks=36818/67674, in_queue=104492, util=97.60% 00:10:48.466 nvme0n2: ios=2324/2560, merge=0/0, ticks=43517/47161, in_queue=90678, util=99.80% 00:10:48.466 nvme0n3: ios=5372/5632, merge=0/0, ticks=55078/49112, in_queue=104190, util=88.85% 00:10:48.466 nvme0n4: ios=2084/2048, merge=0/0, ticks=22061/30119, in_queue=52180, util=95.90% 00:10:48.466 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:48.466 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189525 00:10:48.466 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:48.466 22:16:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:48.466 [global] 00:10:48.466 thread=1 00:10:48.466 invalidate=1 00:10:48.466 rw=read 00:10:48.466 time_based=1 00:10:48.466 runtime=10 00:10:48.466 ioengine=libaio 00:10:48.466 direct=1 00:10:48.466 bs=4096 00:10:48.466 iodepth=1 00:10:48.466 norandommap=1 00:10:48.466 numjobs=1 00:10:48.466 00:10:48.466 [job0] 00:10:48.466 filename=/dev/nvme0n1 00:10:48.466 [job1] 00:10:48.466 filename=/dev/nvme0n2 00:10:48.467 [job2] 00:10:48.467 filename=/dev/nvme0n3 00:10:48.467 [job3] 00:10:48.467 filename=/dev/nvme0n4 00:10:48.467 Could not set queue depth (nvme0n1) 00:10:48.467 Could not set queue depth (nvme0n2) 00:10:48.467 Could not set queue depth (nvme0n3) 00:10:48.467 Could not set queue depth (nvme0n4) 00:10:48.467 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.467 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.467 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.467 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.467 fio-3.35 00:10:48.467 Starting 4 threads 00:10:51.751 22:16:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:51.751 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45899776, buflen=4096 00:10:51.751 fio: pid=189722, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:51.751 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:51.751 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=50327552, buflen=4096 00:10:51.751 fio: pid=189721, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:51.751 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.751 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:51.751 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3477504, buflen=4096 00:10:51.751 fio: pid=189719, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:51.751 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.751 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:52.009 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52121600, buflen=4096 00:10:52.009 fio: pid=189720, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.009 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.009 22:16:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:52.009 00:10:52.009 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189719: Mon Dec 16 22:16:41 2024 00:10:52.009 read: IOPS=272, BW=1088KiB/s (1114kB/s)(3396KiB/3122msec) 00:10:52.009 slat (usec): min=6, max=12397, avg=35.89, stdev=520.29 00:10:52.009 clat (usec): min=201, max=46496, avg=3610.17, stdev=11203.83 00:10:52.009 lat (usec): min=209, max=46512, avg=3646.08, stdev=11210.65 00:10:52.009 clat percentiles (usec): 00:10:52.010 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:10:52.010 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:52.010 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 420], 95.00th=[41157], 00:10:52.010 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:10:52.010 | 99.99th=[46400] 00:10:52.010 bw ( KiB/s): min= 104, max= 4928, per=2.53%, avg=1124.00, stdev=1918.71, samples=6 00:10:52.010 iops : min= 26, max= 1232, avg=281.00, stdev=479.68, samples=6 00:10:52.010 lat (usec) : 250=9.18%, 500=82.47%, 750=0.12% 00:10:52.010 lat (msec) : 50=8.12% 00:10:52.010 cpu : usr=0.03%, sys=0.32%, ctx=856, majf=0, minf=1 00:10:52.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 issued rwts: total=850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.010 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189720: Mon Dec 16 22:16:41 2024 00:10:52.010 read: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(49.7MiB/3332msec) 00:10:52.010 slat (usec): min=6, max=15606, avg=13.80, stdev=286.09 00:10:52.010 clat (usec): min=155, max=22022, avg=244.11, stdev=194.41 00:10:52.010 lat (usec): min=162, max=22030, avg=257.91, stdev=348.75 00:10:52.010 clat percentiles (usec): 00:10:52.010 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 212], 20.00th=[ 235], 00:10:52.010 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:52.010 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:10:52.010 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 392], 99.95th=[ 429], 00:10:52.010 | 99.99th=[ 515] 00:10:52.010 bw ( KiB/s): min=15011, max=15520, per=34.67%, avg=15427.17, stdev=204.47, samples=6 00:10:52.010 iops : min= 3752, max= 3880, avg=3856.67, stdev=51.42, samples=6 00:10:52.010 lat (usec) : 250=60.18%, 500=39.79%, 750=0.02% 00:10:52.010 lat (msec) : 50=0.01% 00:10:52.010 cpu : usr=2.16%, sys=5.91%, ctx=12732, majf=0, minf=2 00:10:52.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 issued rwts: total=12726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.010 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189721: Mon Dec 16 22:16:41 2024 00:10:52.010 read: IOPS=4227, BW=16.5MiB/s (17.3MB/s)(48.0MiB/2907msec) 00:10:52.010 slat (nsec): min=6272, 
max=34703, avg=7207.27, stdev=1138.68 00:10:52.010 clat (usec): min=164, max=41109, avg=226.53, stdev=369.56 00:10:52.010 lat (usec): min=171, max=41116, avg=233.73, stdev=369.56 00:10:52.010 clat percentiles (usec): 00:10:52.010 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:52.010 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:52.010 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 251], 00:10:52.010 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 330], 99.95th=[ 379], 00:10:52.010 | 99.99th=[ 1385] 00:10:52.010 bw ( KiB/s): min=15496, max=18176, per=38.03%, avg=16921.60, stdev=982.85, samples=5 00:10:52.010 iops : min= 3874, max= 4544, avg=4230.40, stdev=245.71, samples=5 00:10:52.010 lat (usec) : 250=94.83%, 500=5.13%, 1000=0.01% 00:10:52.010 lat (msec) : 2=0.02%, 50=0.01% 00:10:52.010 cpu : usr=1.17%, sys=3.72%, ctx=12290, majf=0, minf=2 00:10:52.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 issued rwts: total=12288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.010 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189722: Mon Dec 16 22:16:41 2024 00:10:52.010 read: IOPS=4152, BW=16.2MiB/s (17.0MB/s)(43.8MiB/2699msec) 00:10:52.010 slat (nsec): min=6426, max=37665, avg=8175.94, stdev=1328.95 00:10:52.010 clat (usec): min=183, max=2821, avg=229.38, stdev=38.16 00:10:52.010 lat (usec): min=190, max=2829, avg=237.55, stdev=38.22 00:10:52.010 clat percentiles (usec): 00:10:52.010 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:52.010 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:10:52.010 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 281], 00:10:52.010 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 412], 99.95th=[ 474], 00:10:52.010 | 99.99th=[ 660] 00:10:52.010 bw ( KiB/s): min=16344, max=17216, per=37.90%, avg=16867.20, stdev=346.99, samples=5 00:10:52.010 iops : min= 4086, max= 4304, avg=4216.80, stdev=86.75, samples=5 00:10:52.010 lat (usec) : 250=86.13%, 500=13.82%, 750=0.03% 00:10:52.010 lat (msec) : 4=0.01% 00:10:52.010 cpu : usr=1.37%, sys=4.00%, ctx=11208, majf=0, minf=2 00:10:52.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.010 issued rwts: total=11207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.010 00:10:52.010 Run status group 0 (all jobs): 00:10:52.010 READ: bw=43.5MiB/s (45.6MB/s), 1088KiB/s-16.5MiB/s (1114kB/s-17.3MB/s), io=145MiB (152MB), run=2699-3332msec 00:10:52.010 00:10:52.010 Disk stats (read/write): 00:10:52.010 nvme0n1: ios=871/0, merge=0/0, ticks=3234/0, in_queue=3234, util=98.43% 00:10:52.010 nvme0n2: ios=11897/0, merge=0/0, ticks=2786/0, in_queue=2786, util=94.53% 00:10:52.010 nvme0n3: ios=12074/0, merge=0/0, ticks=2675/0, in_queue=2675, util=96.50% 00:10:52.010 nvme0n4: ios=10839/0, merge=0/0, ticks=2415/0, in_queue=2415, util=96.46% 00:10:52.268 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.268 22:16:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:52.527 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.527 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:52.785 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.785 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:52.785 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.785 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:53.043 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:53.043 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189525 00:10:53.043 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:53.043 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:53.302 nvmf hotplug test: fio failed as expected 00:10:53.302 22:16:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:53.560 22:16:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:53.560 rmmod nvme_tcp 00:10:53.560 rmmod nvme_fabrics 00:10:53.560 rmmod nvme_keyring 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186709 ']' 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186709 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186709 ']' 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186709 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186709 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186709' 00:10:53.560 killing process with pid 186709 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186709 00:10:53.560 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186709 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.820 22:16:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.726 00:10:55.726 real 0m26.920s 00:10:55.726 user 1m46.427s 00:10:55.726 sys 0m8.652s 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.726 ************************************ 00:10:55.726 END TEST nvmf_fio_target 00:10:55.726 ************************************ 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.726 22:16:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.986 ************************************ 00:10:55.986 START TEST nvmf_bdevio 00:10:55.986 ************************************ 00:10:55.986 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.986 * Looking for test storage... 
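For anyone replaying the suite that just finished above (the nvmf_fio_target hotplug pass), the traced commands reduce to roughly the following sketch. Paths are this workspace's; SPDK is a shorthand variable introduced here, and the harness additionally deletes Malloc2 through Malloc6 before disconnecting the host:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start a 10-second background read workload against the exported namespaces
    "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # hot-remove the backing bdevs while fio is running; each delete surfaces
    # as an io_u 'Operation not supported' error on the corresponding namespace
    "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
    "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
    "$SPDK/scripts/rpc.py" bdev_malloc_delete Malloc0
    "$SPDK/scripts/rpc.py" bdev_malloc_delete Malloc1
    # fio is expected to exit nonzero once its files vanish
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'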
00:10:55.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.987 --rc genhtml_branch_coverage=1 00:10:55.987 --rc genhtml_function_coverage=1 00:10:55.987 --rc genhtml_legend=1 00:10:55.987 --rc geninfo_all_blocks=1 00:10:55.987 --rc geninfo_unexecuted_blocks=1 00:10:55.987 00:10:55.987 ' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.987 --rc genhtml_branch_coverage=1 00:10:55.987 --rc genhtml_function_coverage=1 00:10:55.987 --rc genhtml_legend=1 00:10:55.987 --rc geninfo_all_blocks=1 00:10:55.987 --rc geninfo_unexecuted_blocks=1 00:10:55.987 00:10:55.987 ' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.987 --rc genhtml_branch_coverage=1 00:10:55.987 --rc genhtml_function_coverage=1 00:10:55.987 --rc genhtml_legend=1 00:10:55.987 --rc geninfo_all_blocks=1 00:10:55.987 --rc geninfo_unexecuted_blocks=1 00:10:55.987 00:10:55.987 ' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.987 --rc genhtml_branch_coverage=1 00:10:55.987 --rc genhtml_function_coverage=1 00:10:55.987 --rc genhtml_legend=1 00:10:55.987 --rc geninfo_all_blocks=1 00:10:55.987 --rc geninfo_unexecuted_blocks=1 00:10:55.987 00:10:55.987 ' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.987 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.988 22:16:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:02.561 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:02.561 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.561 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.562 22:16:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:02.562 Found net devices under 0000:af:00.0: cvl_0_0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:02.562 Found net devices under 0000:af:00.1: cvl_0_1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.562 
22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:11:02.562 00:11:02.562 --- 10.0.0.2 ping statistics --- 00:11:02.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.562 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:11:02.562 00:11:02.562 --- 10.0.0.1 ping statistics --- 00:11:02.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.562 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193971 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193971 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193971 ']' 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 [2024-12-16 22:16:51.393852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
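Before the target application came up, the nvmf_tcp_init steps traced above wired the two physical ports into a point-to-point topology, with the target side isolated in its own network namespace. Condensed from the commands in the trace (the interface names cvl_0_0/cvl_0_1 are specific to this runner's NICs; the real iptables rule also carries an SPDK_NVMF comment tag):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the two pings above verify reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1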
00:11:02.562 [2024-12-16 22:16:51.393895] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.562 [2024-12-16 22:16:51.474496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.562 [2024-12-16 22:16:51.496995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.562 [2024-12-16 22:16:51.497031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.562 [2024-12-16 22:16:51.497039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.562 [2024-12-16 22:16:51.497045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.562 [2024-12-16 22:16:51.497050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.562 [2024-12-16 22:16:51.498367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.562 [2024-12-16 22:16:51.498462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:02.562 [2024-12-16 22:16:51.498575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.562 [2024-12-16 22:16:51.498575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 [2024-12-16 22:16:51.641497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 Malloc0 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.562 22:16:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.563 [2024-12-16 22:16:51.702832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:02.563 { 00:11:02.563 "params": { 00:11:02.563 "name": "Nvme$subsystem", 00:11:02.563 "trtype": "$TEST_TRANSPORT", 00:11:02.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.563 "adrfam": "ipv4", 00:11:02.563 "trsvcid": "$NVMF_PORT", 00:11:02.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.563 "hdgst": ${hdgst:-false}, 00:11:02.563 "ddgst": ${ddgst:-false} 00:11:02.563 }, 00:11:02.563 "method": "bdev_nvme_attach_controller" 00:11:02.563 } 00:11:02.563 EOF 00:11:02.563 )") 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:02.563 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:02.563 "params": { 00:11:02.563 "name": "Nvme1", 00:11:02.563 "trtype": "tcp", 00:11:02.563 "traddr": "10.0.0.2", 00:11:02.563 "adrfam": "ipv4", 00:11:02.563 "trsvcid": "4420", 00:11:02.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:02.563 "hdgst": false, 00:11:02.563 "ddgst": false 00:11:02.563 }, 00:11:02.563 "method": "bdev_nvme_attach_controller" 00:11:02.563 }' 00:11:02.563 [2024-12-16 22:16:51.752146] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
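The target provisioning that bdevio exercises was completed in the rpc_cmd calls traced above. Issued directly with scripts/rpc.py (rpc_cmd is a thin wrapper that routes the call to the target's RPC socket inside the namespace), the sequence is roughly:

    # create the TCP transport with the options traced above
    # (-u 8192 sets the I/O unit size; -o is passed through exactly as logged)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE/BLOCK_SIZE)
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a), with the serial shown in the trace
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420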
00:11:02.563 [2024-12-16 22:16:51.752187] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194143 ] 00:11:02.563 [2024-12-16 22:16:51.811426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.563 [2024-12-16 22:16:51.839209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.563 [2024-12-16 22:16:51.839245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.563 [2024-12-16 22:16:51.839245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.563 I/O targets: 00:11:02.563 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:02.563 00:11:02.563 00:11:02.563 CUnit - A unit testing framework for C - Version 2.1-3 00:11:02.563 http://cunit.sourceforge.net/ 00:11:02.563 00:11:02.563 00:11:02.563 Suite: bdevio tests on: Nvme1n1 00:11:02.563 Test: blockdev write read block ...passed 00:11:02.563 Test: blockdev write zeroes read block ...passed 00:11:02.563 Test: blockdev write zeroes read no split ...passed 00:11:02.563 Test: blockdev write zeroes read split ...passed 00:11:02.563 Test: blockdev write zeroes read split partial ...passed 00:11:02.563 Test: blockdev reset ...[2024-12-16 22:16:52.108541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:02.563 [2024-12-16 22:16:52.108602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1011630 (9): Bad file descriptor 00:11:02.563 [2024-12-16 22:16:52.120240] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:02.563 passed
00:11:02.563 Test: blockdev write read 8 blocks ...passed
00:11:02.563 Test: blockdev write read size > 128k ...passed
00:11:02.563 Test: blockdev write read invalid size ...passed
00:11:02.563 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:02.563 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:02.563 Test: blockdev write read max offset ...passed
00:11:02.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:02.822 Test: blockdev writev readv 8 blocks ...passed
00:11:02.822 Test: blockdev writev readv 30 x 1block ...passed
00:11:02.822 Test: blockdev writev readv block ...passed
00:11:02.822 Test: blockdev writev readv size > 128k ...passed
00:11:02.822 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:02.822 Test: blockdev comparev and writev ...[2024-12-16 22:16:52.376876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.376911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.376926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.376934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.377714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:02.822 [2024-12-16 22:16:52.377720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:02.822 passed
00:11:02.822 Test: blockdev nvme passthru rw ...passed
00:11:02.822 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:16:52.460556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:02.822 [2024-12-16 22:16:52.460578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.460687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:02.822 [2024-12-16 22:16:52.460697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.460792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:02.822 [2024-12-16 22:16:52.460802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:02.822 [2024-12-16 22:16:52.460899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:02.822 [2024-12-16 22:16:52.460909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:02.822 passed
00:11:02.822 Test: blockdev nvme admin passthru ...passed
00:11:02.822 Test: blockdev copy ...passed
00:11:02.822
00:11:02.822 Run Summary: Type Total Ran Passed Failed Inactive
00:11:02.822 suites 1 1 n/a 0 0
00:11:02.822 tests 23 23 23 0 0
00:11:02.822 asserts 152 152 152 0 n/a
00:11:02.822
00:11:02.822 Elapsed time = 1.068 seconds
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:03.081 rmmod nvme_tcp
00:11:03.081 rmmod nvme_fabrics
00:11:03.081 rmmod nvme_keyring
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
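For anyone replaying this phase by hand: the nvmftestfini teardown traced immediately above and below reduces to roughly the following sketch. It is reconstructed from the traced commands, not copied from nvmf/common.sh; the PID (193971) and the interface/namespace names are specific to this run, and the `ip netns delete` line is an assumption about what `_remove_spdk_ns` ultimately does rather than something shown verbatim in the log.

```bash
# Hedged sketch of the teardown the harness performs after the bdevio suite:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem
sync
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics/nvme_keyring per the rmmod output
modprobe -v -r nvme-fabrics
kill 193971                                               # stop the SPDK target process
iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop the SPDK_NVMF firewall rules (iptr)
ip netns delete cvl_0_0_ns_spdk                           # assumption: _remove_spdk_ns cleanup
ip -4 addr flush cvl_0_1                                  # clear the initiator-side interface
```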
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193971 ']'
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193971
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 193971 ']'
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193971
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193971
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193971'
00:11:03.081 killing process with pid 193971
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193971
00:11:03.081 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193971
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:03.340 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:05.878
00:11:05.878 real 0m9.575s
00:11:05.878 user 0m9.289s
00:11:05.878 sys 0m4.767s
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:05.878 ************************************
00:11:05.878 END TEST nvmf_bdevio
00:11:05.878 ************************************
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:11:05.878
00:11:05.878 real 4m34.515s
00:11:05.878 user 10m27.308s
00:11:05.878 sys 1m36.697s
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:05.878 ************************************
00:11:05.878 END TEST nvmf_target_core
00:11:05.878 ************************************
00:11:05.878 22:16:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:11:05.878 22:16:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:05.878 22:16:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:05.878 22:16:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:05.878 ************************************
00:11:05.878 START TEST nvmf_target_extra
00:11:05.878 ************************************
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:11:05.878 * Looking for test storage...
00:11:05.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-:
00:11:05.878 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:05.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.879 --rc genhtml_branch_coverage=1
00:11:05.879 --rc genhtml_function_coverage=1
00:11:05.879 --rc genhtml_legend=1
00:11:05.879 --rc geninfo_all_blocks=1
00:11:05.879 --rc geninfo_unexecuted_blocks=1
00:11:05.879
00:11:05.879 '
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:05.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.879 --rc genhtml_branch_coverage=1
00:11:05.879 --rc genhtml_function_coverage=1
00:11:05.879 --rc genhtml_legend=1
00:11:05.879 --rc geninfo_all_blocks=1
00:11:05.879 --rc geninfo_unexecuted_blocks=1
00:11:05.879
00:11:05.879 '
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:05.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.879 --rc genhtml_branch_coverage=1
00:11:05.879 --rc genhtml_function_coverage=1
00:11:05.879 --rc genhtml_legend=1
00:11:05.879 --rc geninfo_all_blocks=1
00:11:05.879 --rc geninfo_unexecuted_blocks=1
00:11:05.879
00:11:05.879 '
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:05.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.879 --rc genhtml_branch_coverage=1
00:11:05.879 --rc genhtml_function_coverage=1
00:11:05.879 --rc genhtml_legend=1
00:11:05.879 --rc geninfo_all_blocks=1
00:11:05.879 --rc geninfo_unexecuted_blocks=1
00:11:05.879
00:11:05.879 '
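The `lt 1.15 2` gate traced above (scripts/common.sh's cmp_versions) is what decides whether the extra lcov flags are applied: it splits both version strings on `.`, `-` and `:` and compares them component by component. A minimal re-implementation sketch follows; it is not the verbatim SPDK helper (the real cmp_versions handles all comparison operators), but it reproduces the traced logic, including the `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` loop bound seen in the log.

```bash
# Minimal sketch of the "less than" version check traced above (assumption:
# simplified to the '<' case only; the real helper dispatches on an operator).
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
  for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing part decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}

# Usage mirroring the trace: is the installed lcov older than 2.x?
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"
```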
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:05.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@")
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:05.879 ************************************
00:11:05.879 START TEST nvmf_example
00:11:05.879 ************************************
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:11:05.879 * Looking for test storage...
00:11:05.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version
00:11:05.879 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-:
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-:
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<'
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:05.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.880 --rc genhtml_branch_coverage=1
00:11:05.880 --rc genhtml_function_coverage=1
00:11:05.880 --rc genhtml_legend=1
00:11:05.880 --rc geninfo_all_blocks=1
00:11:05.880 --rc geninfo_unexecuted_blocks=1
00:11:05.880
00:11:05.880 '
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:05.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.880 --rc genhtml_branch_coverage=1
00:11:05.880 --rc genhtml_function_coverage=1
00:11:05.880 --rc genhtml_legend=1
00:11:05.880 --rc geninfo_all_blocks=1
00:11:05.880 --rc geninfo_unexecuted_blocks=1
00:11:05.880
00:11:05.880 '
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:05.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.880 --rc genhtml_branch_coverage=1
00:11:05.880 --rc genhtml_function_coverage=1
00:11:05.880 --rc genhtml_legend=1
00:11:05.880 --rc geninfo_all_blocks=1
00:11:05.880 --rc geninfo_unexecuted_blocks=1
00:11:05.880
00:11:05.880 '
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:05.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:05.880 --rc genhtml_branch_coverage=1
00:11:05.880 --rc genhtml_function_coverage=1
00:11:05.880 --rc genhtml_legend=1
00:11:05.880 --rc geninfo_all_blocks=1
00:11:05.880 --rc geninfo_unexecuted_blocks=1
00:11:05.880
00:11:05.880 '
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:05.880 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:06.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable
00:11:06.140 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=()
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:11:11.419 Found 0000:af:00.0 (0x8086 - 0x159b)
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:11:11.419 Found 0000:af:00.1 (0x8086 - 0x159b)
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:11.419 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:11:11.420 Found net devices under 0000:af:00.0: cvl_0_0
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:11:11.420 Found net devices under 0000:af:00.1: cvl_0_1
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:11.420 22:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:11.420 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:11.420 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:11.420 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:11.420 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:11.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:11.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms
00:11:11.679
00:11:11.679 --- 10.0.0.2 ping statistics ---
00:11:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:11.679 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:11.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:11.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:11:11.679
00:11:11.679 --- 10.0.0.1 ping statistics ---
00:11:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:11.679 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197899
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197899
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197899 ']'
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:11.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:11.679 22:17:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.615 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:12.615 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:12.615 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:12.615 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:12.615 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:12.616 22:17:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:24.823 Initializing NVMe Controllers
00:11:24.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:24.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:24.823 Initialization complete. Launching workers.
00:11:24.823 ========================================================
00:11:24.823                                                                        Latency(us)
00:11:24.823 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:11:24.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18698.49      73.04    3422.25     656.80   17260.78
00:11:24.823 ========================================================
00:11:24.823 Total                                                                :   18698.49      73.04    3422.25     656.80   17260.78
00:11:24.823
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:24.823 rmmod nvme_tcp
00:11:24.823 rmmod nvme_fabrics
00:11:24.823 rmmod nvme_keyring
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197899 ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197899
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197899 ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197899
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197899
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197899'
killing process with pid 197899
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197899
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197899
00:11:24.823 nvmf threads initialize successfully
00:11:24.823 bdev subsystem init successfully
00:11:24.823 created a nvmf target service
00:11:24.823 create target's poll groups done
00:11:24.823 all subsystems of target started
00:11:24.823 nvmf target is running
00:11:24.823 all subsystems of target stopped
00:11:24.823 destroy target's poll groups done
00:11:24.823 destroyed the nvmf target service
00:11:24.823 bdev subsystem finish successfully
00:11:24.823 nvmf threads destroy successfully
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:24.823 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:25.083 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:25.083 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:25.083 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:25.083 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:25.343
00:11:25.343 real 0m19.439s
00:11:25.343 user 0m45.979s
00:11:25.343 sys 0m5.816s
00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:25.343 ************************************
00:11:25.343 END TEST nvmf_example
00:11:25.343 ************************************
00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:25.343 22:17:14
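
The nvmf_example run above finished at roughly 18.7k IOPS / 73 MiB/s with a 3.42 ms mean latency. The spdk_nvme_perf invocation it traces can be replayed by hand against any live NVMe-oF TCP listener; a minimal sketch, assuming the same workspace layout and a target already up at 10.0.0.2:4420 (all flag values are copied from the trace; their meanings are the standard spdk_nvme_perf options):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -q 64: queue depth; -o 4096: I/O size in bytes; -w randrw: random mixed I/O;
    # -M 30: 30% reads in the mix; -t 10: run time in seconds; -r: target transport ID.
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The trace resumes below with the nvmf_filesystem test.
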
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.343 ************************************ 00:11:25.343 START TEST nvmf_filesystem 00:11:25.343 ************************************ 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:25.343 * Looking for test storage... 00:11:25.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:25.343 22:17:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:25.603 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:25.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.604 --rc genhtml_branch_coverage=1 00:11:25.604 --rc genhtml_function_coverage=1 00:11:25.604 --rc genhtml_legend=1 00:11:25.604 --rc geninfo_all_blocks=1 00:11:25.604 --rc geninfo_unexecuted_blocks=1 00:11:25.604 00:11:25.604 ' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:25.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.604 --rc genhtml_branch_coverage=1 00:11:25.604 --rc genhtml_function_coverage=1 00:11:25.604 --rc genhtml_legend=1 00:11:25.604 --rc geninfo_all_blocks=1 00:11:25.604 --rc geninfo_unexecuted_blocks=1 00:11:25.604 00:11:25.604 ' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:25.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.604 --rc genhtml_branch_coverage=1 00:11:25.604 --rc genhtml_function_coverage=1 00:11:25.604 --rc genhtml_legend=1 00:11:25.604 --rc geninfo_all_blocks=1 00:11:25.604 --rc geninfo_unexecuted_blocks=1 00:11:25.604 00:11:25.604 ' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:25.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.604 --rc genhtml_branch_coverage=1 00:11:25.604 --rc genhtml_function_coverage=1 00:11:25.604 --rc genhtml_legend=1 00:11:25.604 --rc geninfo_all_blocks=1 00:11:25.604 --rc geninfo_unexecuted_blocks=1 00:11:25.604 00:11:25.604 ' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:25.604 22:17:15 
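
The scripts/common.sh trace above walks cmp_versions component by component to conclude that lcov 1.15 is older than 2, which selects the legacy --rc lcov_*=1 option spellings for LCOV_OPTS. A loose standalone sketch of that compare, assuming plain dotted version strings (the real helper additionally validates each component through its decimal function, as traced above):

    lt() {    # true (0) when $1 is strictly older than $2
        local IFS='.-:' v=0
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ++v ))
        done
        return 1    # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use legacy --rc option names"
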
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:25.604 
22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:25.604 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
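
applications.sh, sourced just above, first locates the repository from its own path and then derives every application path from that root, so tests can launch $_app_dir/nvmf_tgt without hard-coding the workspace. A rough sketch of the idiom (the two-level ../.. hop is an assumption based on the test/common layout traced here):

    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common
    _root=$(readlink -f "$_root/../..")                     # .../spdk
    _app_dir=$_root/build/bin
    NVMF_APP=("$_app_dir/nvmf_tgt")    # array form lets callers append arguments later
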
00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:25.604 #define SPDK_CONFIG_H 00:11:25.604 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:25.604 #define SPDK_CONFIG_APPS 1 00:11:25.604 #define SPDK_CONFIG_ARCH native 00:11:25.604 #undef SPDK_CONFIG_ASAN 00:11:25.604 #undef SPDK_CONFIG_AVAHI 00:11:25.604 #undef SPDK_CONFIG_CET 00:11:25.604 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:25.604 #define SPDK_CONFIG_COVERAGE 1 00:11:25.604 #define SPDK_CONFIG_CROSS_PREFIX 00:11:25.604 #undef SPDK_CONFIG_CRYPTO 00:11:25.604 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:25.604 #undef SPDK_CONFIG_CUSTOMOCF 00:11:25.604 #undef SPDK_CONFIG_DAOS 00:11:25.604 #define SPDK_CONFIG_DAOS_DIR 00:11:25.604 #define SPDK_CONFIG_DEBUG 1 00:11:25.604 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:25.604 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:25.604 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:25.604 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:25.604 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:25.604 #undef SPDK_CONFIG_DPDK_UADK 00:11:25.604 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:25.604 #define SPDK_CONFIG_EXAMPLES 1 00:11:25.604 #undef SPDK_CONFIG_FC 00:11:25.604 #define SPDK_CONFIG_FC_PATH 00:11:25.604 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:25.604 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:25.604 #define SPDK_CONFIG_FSDEV 1 00:11:25.604 #undef SPDK_CONFIG_FUSE 00:11:25.604 #undef SPDK_CONFIG_FUZZER 00:11:25.604 #define SPDK_CONFIG_FUZZER_LIB 00:11:25.604 #undef SPDK_CONFIG_GOLANG 00:11:25.604 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:25.604 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:25.604 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:25.604 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:25.604 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:25.604 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:25.604 #undef SPDK_CONFIG_HAVE_LZ4 00:11:25.604 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:25.604 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:25.604 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:25.604 #define SPDK_CONFIG_IDXD 1 00:11:25.604 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:25.604 #undef SPDK_CONFIG_IPSEC_MB 00:11:25.604 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:25.604 #define SPDK_CONFIG_ISAL 1 00:11:25.604 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:25.604 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:25.604 #define SPDK_CONFIG_LIBDIR 00:11:25.604 #undef SPDK_CONFIG_LTO 00:11:25.604 #define SPDK_CONFIG_MAX_LCORES 128 00:11:25.604 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:25.604 #define SPDK_CONFIG_NVME_CUSE 1 00:11:25.604 #undef SPDK_CONFIG_OCF 00:11:25.604 #define SPDK_CONFIG_OCF_PATH 00:11:25.604 #define SPDK_CONFIG_OPENSSL_PATH 00:11:25.604 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:25.604 #define SPDK_CONFIG_PGO_DIR 00:11:25.604 #undef SPDK_CONFIG_PGO_USE 00:11:25.604 #define SPDK_CONFIG_PREFIX /usr/local 00:11:25.604 #undef SPDK_CONFIG_RAID5F 00:11:25.604 #undef SPDK_CONFIG_RBD 00:11:25.604 #define SPDK_CONFIG_RDMA 1 00:11:25.604 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:25.604 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:25.604 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:25.604 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:25.604 #define SPDK_CONFIG_SHARED 1 00:11:25.604 #undef SPDK_CONFIG_SMA 00:11:25.604 #define SPDK_CONFIG_TESTS 1 00:11:25.604 #undef SPDK_CONFIG_TSAN 00:11:25.604 #define SPDK_CONFIG_UBLK 1 00:11:25.604 #define SPDK_CONFIG_UBSAN 1 00:11:25.604 #undef SPDK_CONFIG_UNIT_TESTS 00:11:25.604 #undef SPDK_CONFIG_URING 00:11:25.604 #define SPDK_CONFIG_URING_PATH 00:11:25.604 #undef SPDK_CONFIG_URING_ZNS 00:11:25.604 #undef SPDK_CONFIG_USDT 00:11:25.604 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:25.604 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:25.604 #define SPDK_CONFIG_VFIO_USER 1 00:11:25.604 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:25.604 #define SPDK_CONFIG_VHOST 1 00:11:25.604 #define SPDK_CONFIG_VIRTIO 1 00:11:25.604 #undef SPDK_CONFIG_VTUNE 00:11:25.604 #define SPDK_CONFIG_VTUNE_DIR 00:11:25.604 #define SPDK_CONFIG_WERROR 1 00:11:25.604 #define SPDK_CONFIG_WPDK_DIR 00:11:25.604 #undef SPDK_CONFIG_XNVME 00:11:25.604 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.604 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:25.605 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
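
A few entries back, scripts/perf/pm/common picked which resource monitors this job runs: an associative array marks the collectors that need sudo, and the bare-metal-only collectors are appended only after the QEMU and container checks pass (as they do here, hence collect-cpu-temp and collect-bmc-pm). A condensed sketch of that selection, with the DMI-based QEMU probe simplified down to the /.dockerenv test seen in the trace:

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    SUDO=('' 'sudo -E')    # indexed by the 0/1 flag above
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)    # bare metal only
    fi
    for mon in "${MONITOR_RESOURCES[@]}"; do
        echo "launch: ${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
    done

The SPDK_TEST_* export run that begins right after it continues below.
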
00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:25.605 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
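
Each ': N' / 'export SPDK_TEST_*' pair in this run (it continues below through SPDK_TEST_NVME_INTERRUPT) is consistent with the usual set-a-default-then-export idiom, which lets the Jenkins job pre-seed any flag through the environment; a minimal sketch with values taken from this trace:

    : "${SPDK_TEST_NVMF:=1}"    # ':' is a no-op; ':=' assigns only if unset or empty
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT    # 'tcp' here, matching this job's --transport=tcp
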
00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:25.605 
22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.605 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:25.606 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 200246 ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 200246 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0LCMK0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:25.606 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0LCMK0/tests/target /tmp/spdk.0LCMK0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88607551488 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552413696 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6944862208 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=47766175744 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087474688 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776026624 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=180224 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:25.606 * Looking for test storage... 
00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88607551488 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9159454720 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:25.606 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:25.607 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.866 --rc genhtml_branch_coverage=1 00:11:25.866 --rc genhtml_function_coverage=1 00:11:25.866 --rc genhtml_legend=1 00:11:25.866 --rc geninfo_all_blocks=1 00:11:25.866 --rc geninfo_unexecuted_blocks=1 00:11:25.866 00:11:25.866 ' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.866 --rc genhtml_branch_coverage=1 00:11:25.866 --rc genhtml_function_coverage=1 00:11:25.866 --rc genhtml_legend=1 00:11:25.866 --rc geninfo_all_blocks=1 00:11:25.866 --rc geninfo_unexecuted_blocks=1 00:11:25.866 00:11:25.866 ' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.866 --rc genhtml_branch_coverage=1 00:11:25.866 --rc genhtml_function_coverage=1 00:11:25.866 --rc genhtml_legend=1 00:11:25.866 --rc geninfo_all_blocks=1 00:11:25.866 --rc geninfo_unexecuted_blocks=1 00:11:25.866 00:11:25.866 ' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:25.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.866 --rc genhtml_branch_coverage=1 00:11:25.866 --rc genhtml_function_coverage=1 00:11:25.866 --rc genhtml_legend=1 00:11:25.866 --rc geninfo_all_blocks=1 00:11:25.866 --rc geninfo_unexecuted_blocks=1 00:11:25.866 00:11:25.866 ' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.866 22:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.866 22:17:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.444 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.444 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.444 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.444 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:32.445 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:32.445 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.445 22:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:32.445 Found net devices under 0000:af:00.0: cvl_0_0 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:32.445 Found net devices under 0000:af:00.1: cvl_0_1 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.445 22:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.445 22:17:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:11:32.445 00:11:32.445 --- 10.0.0.2 ping statistics --- 00:11:32.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.445 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:32.445 00:11:32.445 --- 10.0.0.1 ping statistics --- 00:11:32.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.445 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.445 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 ************************************ 00:11:32.446 START TEST nvmf_filesystem_no_in_capsule 00:11:32.446 ************************************ 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=203241 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 203241 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 203241 ']' 00:11:32.446 22:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 [2024-12-16 22:17:21.359984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:32.446 [2024-12-16 22:17:21.360023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.446 [2024-12-16 22:17:21.442319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.446 [2024-12-16 22:17:21.465287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.446 [2024-12-16 22:17:21.465325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.446 [2024-12-16 22:17:21.465332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.446 [2024-12-16 22:17:21.465338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.446 [2024-12-16 22:17:21.465344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
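
The nvmfappstart step traced above reduces to launching nvmf_tgt inside the target namespace and then polling its RPC socket until the app answers, which is what waitforlisten does. A condensed sketch, with an rpc.py poll standing in for the helper's socket check:

# Start the target in the namespace that owns cvl_0_0 (10.0.0.2), as on the @508 line.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten: block until the app responds on the default RPC socket.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
    sleep 0.5
done
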
00:11:32.446 [2024-12-16 22:17:21.466641] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.446 [2024-12-16 22:17:21.466749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.446 [2024-12-16 22:17:21.466660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.446 [2024-12-16 22:17:21.466750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 [2024-12-16 22:17:21.606706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 [2024-12-16 22:17:21.766921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.446 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:32.446 { 00:11:32.446 "name": "Malloc1", 00:11:32.446 "aliases": [ 00:11:32.446 "a41fe115-59dc-4b85-ad9d-707c45966c48" 00:11:32.446 ], 00:11:32.446 "product_name": "Malloc disk", 00:11:32.446 "block_size": 512, 00:11:32.446 "num_blocks": 1048576, 00:11:32.446 "uuid": "a41fe115-59dc-4b85-ad9d-707c45966c48", 00:11:32.446 "assigned_rate_limits": { 00:11:32.446 "rw_ios_per_sec": 0, 00:11:32.446 "rw_mbytes_per_sec": 0, 00:11:32.446 "r_mbytes_per_sec": 0, 00:11:32.446 "w_mbytes_per_sec": 0 00:11:32.446 }, 00:11:32.446 "claimed": true, 00:11:32.446 "claim_type": "exclusive_write", 00:11:32.446 "zoned": false, 00:11:32.446 "supported_io_types": { 00:11:32.446 "read": 
true, 00:11:32.446 "write": true, 00:11:32.446 "unmap": true, 00:11:32.446 "flush": true, 00:11:32.446 "reset": true, 00:11:32.446 "nvme_admin": false, 00:11:32.446 "nvme_io": false, 00:11:32.446 "nvme_io_md": false, 00:11:32.446 "write_zeroes": true, 00:11:32.446 "zcopy": true, 00:11:32.446 "get_zone_info": false, 00:11:32.446 "zone_management": false, 00:11:32.446 "zone_append": false, 00:11:32.446 "compare": false, 00:11:32.446 "compare_and_write": false, 00:11:32.446 "abort": true, 00:11:32.446 "seek_hole": false, 00:11:32.446 "seek_data": false, 00:11:32.446 "copy": true, 00:11:32.446 "nvme_iov_md": false 00:11:32.446 }, 00:11:32.446 "memory_domains": [ 00:11:32.446 { 00:11:32.446 "dma_device_id": "system", 00:11:32.446 "dma_device_type": 1 00:11:32.446 }, 00:11:32.446 { 00:11:32.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.446 "dma_device_type": 2 00:11:32.446 } 00:11:32.446 ], 00:11:32.446 "driver_specific": {} 00:11:32.446 } 00:11:32.446 ]' 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:32.447 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.382 22:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.382 22:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:33.382 22:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.382 22:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:33.382 22:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:35.282 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:35.282 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:35.282 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:35.539 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:35.539 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.539 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:35.539 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:35.539 22:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:35.539 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:35.540 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:35.797 22:17:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:36.732 22:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 ************************************ 00:11:37.667 START TEST filesystem_ext4 00:11:37.667 ************************************ 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
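For reference, each filesystem_* test below runs the same check; a minimal sketch reconstructed from this trace, assuming $fstype is ext4, btrfs, or xfs and $nvmfpid is the nvmf_tgt PID (203241 in this half of the run):

    # One filesystem_* iteration as logged by target/filesystem.sh in this trace.
    force=-f; [ "$fstype" = ext4 ] && force=-F   # ext4's mkfs takes -F, btrfs/xfs take -f
    mkfs."$fstype" "$force" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                           # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition survived the mount cycle
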
00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:37.667 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:37.667 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.667 Discarding device blocks: 0/522240 done 00:11:37.667 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:37.667 Filesystem UUID: 8ef7b99c-3cb3-4e1c-827c-544a7db3dc5d 00:11:37.667 Superblock backups stored on blocks: 00:11:37.667 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:37.667 00:11:37.667 Allocating group tables: 0/64 done 00:11:37.667 Writing inode tables: 0/64 done 00:11:37.926 Creating journal (8192 blocks): done 00:11:37.926 Writing superblocks and filesystem accounting information: 0/64 done 00:11:37.926 00:11:37.926 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:37.926 22:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.491 
22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 203241 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.491 00:11:44.491 real 0m6.171s 00:11:44.491 user 0m0.023s 00:11:44.491 sys 0m0.122s 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:44.491 ************************************ 00:11:44.491 END TEST filesystem_ext4 00:11:44.491 ************************************ 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.491 ************************************ 00:11:44.491 START TEST filesystem_btrfs 00:11:44.491 ************************************ 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:44.491 22:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:44.491 btrfs-progs v6.8.1 00:11:44.491 See https://btrfs.readthedocs.io for more information. 00:11:44.491 00:11:44.491 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:44.491 NOTE: several default settings have changed in version 5.15, please make sure 00:11:44.491 this does not affect your deployments: 00:11:44.491 - DUP for metadata (-m dup) 00:11:44.491 - enabled no-holes (-O no-holes) 00:11:44.491 - enabled free-space-tree (-R free-space-tree) 00:11:44.491 00:11:44.491 Label: (null) 00:11:44.491 UUID: 2f9186ef-1e07-48d2-97f3-53754146766e 00:11:44.491 Node size: 16384 00:11:44.491 Sector size: 4096 (CPU page size: 4096) 00:11:44.491 Filesystem size: 510.00MiB 00:11:44.491 Block group profiles: 00:11:44.491 Data: single 8.00MiB 00:11:44.491 Metadata: DUP 32.00MiB 00:11:44.491 System: DUP 8.00MiB 00:11:44.491 SSD detected: yes 00:11:44.491 Zoned device: no 00:11:44.491 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:44.491 Checksum: crc32c 00:11:44.491 Number of devices: 1 00:11:44.491 Devices: 00:11:44.491 ID SIZE PATH 00:11:44.491 1 510.00MiB /dev/nvme0n1p1 00:11:44.491 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.491 22:17:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 203241 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.058 
22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.058 00:11:45.058 real 0m1.162s 00:11:45.058 user 0m0.019s 00:11:45.058 sys 0m0.162s 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:45.058 ************************************ 00:11:45.058 END TEST filesystem_btrfs 00:11:45.058 ************************************ 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.058 ************************************ 00:11:45.058 START TEST filesystem_xfs 00:11:45.058 ************************************ 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.058 22:17:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:45.317 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:45.317 = sectsz=512 attr=2, projid32bit=1 00:11:45.317 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:45.317 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:45.317 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:45.317 = sunit=0 swidth=0 blks 00:11:45.317 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:45.317 log =internal log bsize=4096 blocks=16384, version=2 00:11:45.317 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:45.317 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:46.255 Discarding blocks...Done. 00:11:46.255 22:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.255 22:17:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 203241 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.786 00:11:48.786 real 0m3.432s 00:11:48.786 user 0m0.023s 00:11:48.786 sys 0m0.125s 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:48.786 ************************************ 00:11:48.786 END TEST filesystem_xfs 00:11:48.786 ************************************ 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.786 22:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.786 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 203241 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 203241 ']' 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 203241 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203241 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203241' 00:11:48.787 killing process with pid 203241 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 203241 00:11:48.787 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 203241 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:49.046 00:11:49.046 real 0m17.321s 00:11:49.046 user 1m8.191s 00:11:49.046 sys 0m1.569s 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.046 ************************************ 00:11:49.046 END TEST nvmf_filesystem_no_in_capsule 00:11:49.046 ************************************ 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.046 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.046 ************************************ 00:11:49.046 START TEST nvmf_filesystem_in_capsule 00:11:49.046 ************************************ 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=206374 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 206374 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 206374 ']' 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
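The in_capsule variant that starts here repeats the bring-up from the no_in_capsule run with one difference: the TCP transport is created with 4096 bytes of in-capsule data (-c 4096) instead of 0. A sketch of that bring-up, using the same RPCs and addresses that appear in the entries below:

    # Target setup and host connect for the in-capsule pass, per this trace.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks -> 1048576 blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
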
00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.047 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.306 [2024-12-16 22:17:38.750703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:49.306 [2024-12-16 22:17:38.750743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.306 [2024-12-16 22:17:38.825006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.306 [2024-12-16 22:17:38.847802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.306 [2024-12-16 22:17:38.847839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.306 [2024-12-16 22:17:38.847846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.306 [2024-12-16 22:17:38.847852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.306 [2024-12-16 22:17:38.847857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:49.306 [2024-12-16 22:17:38.849143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.306 [2024-12-16 22:17:38.849265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.306 [2024-12-16 22:17:38.849303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.306 [2024-12-16 22:17:38.849304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.306 [2024-12-16 22:17:38.977000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.306 22:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.306 22:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.565 Malloc1 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.565 [2024-12-16 22:17:39.128343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.565 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.565 { 00:11:49.565 "name": "Malloc1", 00:11:49.565 "aliases": [ 00:11:49.565 "6f325419-aeb1-4470-ab22-8bc7b037fc18" 00:11:49.565 ], 00:11:49.565 "product_name": "Malloc disk", 00:11:49.565 "block_size": 512, 00:11:49.565 "num_blocks": 1048576, 00:11:49.565 "uuid": "6f325419-aeb1-4470-ab22-8bc7b037fc18", 00:11:49.565 "assigned_rate_limits": { 00:11:49.565 "rw_ios_per_sec": 0, 00:11:49.565 "rw_mbytes_per_sec": 0, 00:11:49.565 "r_mbytes_per_sec": 0, 00:11:49.565 "w_mbytes_per_sec": 0 00:11:49.565 }, 00:11:49.565 "claimed": true, 00:11:49.565 "claim_type": "exclusive_write", 00:11:49.565 "zoned": false, 00:11:49.565 "supported_io_types": { 00:11:49.565 "read": true, 00:11:49.565 "write": true, 00:11:49.565 "unmap": true, 00:11:49.565 "flush": true, 00:11:49.565 "reset": true, 00:11:49.565 "nvme_admin": false, 00:11:49.565 "nvme_io": false, 00:11:49.565 "nvme_io_md": false, 00:11:49.565 "write_zeroes": true, 00:11:49.565 "zcopy": true, 00:11:49.565 "get_zone_info": false, 00:11:49.565 "zone_management": false, 00:11:49.565 "zone_append": false, 00:11:49.565 "compare": false, 00:11:49.565 "compare_and_write": false, 00:11:49.565 "abort": true, 00:11:49.565 "seek_hole": false, 00:11:49.565 "seek_data": false, 00:11:49.565 "copy": true, 00:11:49.565 "nvme_iov_md": false 00:11:49.565 }, 00:11:49.565 "memory_domains": [ 00:11:49.565 { 00:11:49.565 "dma_device_id": "system", 00:11:49.565 "dma_device_type": 1 00:11:49.565 }, 00:11:49.565 { 00:11:49.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.565 "dma_device_type": 2 00:11:49.565 } 00:11:49.565 ], 00:11:49.565 "driver_specific": {} 00:11:49.565 } 00:11:49.565 ]' 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.565 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.949 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.949 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.949 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.949 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.949 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:52.850 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.109 22:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.367 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.302 ************************************ 00:11:54.302 START TEST filesystem_in_capsule_ext4 00:11:54.302 ************************************ 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.302 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.302 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.302 Discarding device blocks: 0/522240 done 00:11:54.302 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.302 Filesystem UUID: a0af3f48-1445-4afd-b716-9edafa3e8aa2 00:11:54.302 Superblock backups stored on blocks: 00:11:54.302 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.302 00:11:54.302 Allocating group tables: 0/64 done 00:11:54.302 Writing inode tables: 
0/64 done 00:11:55.676 Creating journal (8192 blocks): done 00:11:56.192 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.192 00:11:56.192 22:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:56.192 22:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 206374 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.760 00:12:02.760 real 0m7.737s 00:12:02.760 user 0m0.025s 00:12:02.760 sys 0m0.074s 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.760 ************************************ 00:12:02.760 END TEST filesystem_in_capsule_ext4 00:12:02.760 ************************************ 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.760 
************************************ 00:12:02.760 START TEST filesystem_in_capsule_btrfs 00:12:02.760 ************************************ 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:02.760 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:02.760 btrfs-progs v6.8.1 00:12:02.760 See https://btrfs.readthedocs.io for more information. 00:12:02.760 00:12:02.760 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:02.760 NOTE: several default settings have changed in version 5.15, please make sure 00:12:02.760 this does not affect your deployments: 00:12:02.760 - DUP for metadata (-m dup) 00:12:02.760 - enabled no-holes (-O no-holes) 00:12:02.760 - enabled free-space-tree (-R free-space-tree) 00:12:02.760 00:12:02.760 Label: (null) 00:12:02.760 UUID: 02f2db70-9135-4474-9a88-664f1b0201ae 00:12:02.760 Node size: 16384 00:12:02.760 Sector size: 4096 (CPU page size: 4096) 00:12:02.760 Filesystem size: 510.00MiB 00:12:02.760 Block group profiles: 00:12:02.760 Data: single 8.00MiB 00:12:02.760 Metadata: DUP 32.00MiB 00:12:02.760 System: DUP 8.00MiB 00:12:02.760 SSD detected: yes 00:12:02.760 Zoned device: no 00:12:02.760 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:02.760 Checksum: crc32c 00:12:02.760 Number of devices: 1 00:12:02.760 Devices: 00:12:02.760 ID SIZE PATH 00:12:02.760 1 510.00MiB /dev/nvme0n1p1 00:12:02.760 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 206374 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.760 00:12:02.760 real 0m0.729s 00:12:02.760 user 0m0.019s 00:12:02.760 sys 0m0.121s 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:02.760 ************************************ 00:12:02.760 END TEST filesystem_in_capsule_btrfs 00:12:02.760 ************************************ 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.760 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.019 ************************************ 00:12:03.019 START TEST filesystem_in_capsule_xfs 00:12:03.019 ************************************ 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.019 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.019 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.019 = sectsz=512 attr=2, projid32bit=1 00:12:03.019 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.019 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.019 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.019 = sunit=0 swidth=0 blks 00:12:03.019 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.019 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.019 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.019 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.956 Discarding blocks...Done. 
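For reference, each per-filesystem test in this suite reduces to the same smoke test: format the partition exposed by the NVMe-oF namespace, mount it, write and delete a file, then unmount. A minimal standalone sketch of that flow, assuming the device and mount point seen in this log (the real harness derives them from the connected controller):

    # Smoke test run once per fstype (ext4, btrfs, xfs); values are illustrative.
    fstype=xfs
    dev=/dev/nvme0n1p1
    mkfs."$fstype" -f "$dev"    # btrfs/xfs take -f to reformat; ext4 uses -F
    mount "$dev" /mnt/device
    touch /mnt/device/aaa       # exercise a write through the TCP transport
    sync                        # flush to the remote namespace
    rm /mnt/device/aaa
    sync
    umount /mnt/device

The real/user/sys triplets that close each test are standard shell time output for the test body.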
00:12:03.956 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.957 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 206374 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.862 00:12:05.862 real 0m2.769s 00:12:05.862 user 0m0.026s 00:12:05.862 sys 0m0.072s 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.862 ************************************ 00:12:05.862 END TEST filesystem_in_capsule_xfs 00:12:05.862 ************************************ 00:12:05.862 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 206374 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 206374 ']' 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 206374 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206374 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206374' 00:12:06.122 killing process with pid 206374 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 206374 00:12:06.122 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 206374 00:12:06.693 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:06.693 00:12:06.693 real 0m17.409s 00:12:06.693 user 1m8.553s 00:12:06.693 sys 0m1.420s 00:12:06.693 22:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.693 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.693 ************************************ 00:12:06.693 END TEST nvmf_filesystem_in_capsule 00:12:06.693 ************************************ 00:12:06.693 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:06.693 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.694 rmmod nvme_tcp 00:12:06.694 rmmod nvme_fabrics 00:12:06.694 rmmod nvme_keyring 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.694 22:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.623 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.623 00:12:08.623 real 0m43.390s 00:12:08.623 user 2m18.751s 00:12:08.623 sys 0m7.653s 00:12:08.623 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.623 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.623 
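Condensed, the nvmftestfini teardown that just ran undoes the earlier network setup. A sketch assuming the interface and namespace names used in this run, and assuming _remove_spdk_ns boils down to deleting the namespace (the helper's internals are not shown in this log):

    # Teardown mirror of the test setup; names are from this run, not constants.
    modprobe -r nvme-tcp nvme-fabrics        # unload initiator-side modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
    ip netns del cvl_0_0_ns_spdk             # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                 # clear the initiator-side address

The grep -v SPDK_NVMF filter works because every rule the harness inserts carries an '-m comment' tag beginning with SPDK_NVMF (visible later in this log), so filtering the save/restore stream removes exactly those rules and nothing else.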
************************************ 00:12:08.623 END TEST nvmf_filesystem 00:12:08.623 ************************************ 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.884 ************************************ 00:12:08.884 START TEST nvmf_target_discovery 00:12:08.884 ************************************ 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:08.884 * Looking for test storage... 00:12:08.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.884 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.885 --rc genhtml_branch_coverage=1 00:12:08.885 --rc genhtml_function_coverage=1 00:12:08.885 --rc genhtml_legend=1 00:12:08.885 --rc geninfo_all_blocks=1 00:12:08.885 --rc geninfo_unexecuted_blocks=1 00:12:08.885 00:12:08.885 ' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.885 --rc genhtml_branch_coverage=1 00:12:08.885 --rc genhtml_function_coverage=1 00:12:08.885 --rc genhtml_legend=1 00:12:08.885 --rc geninfo_all_blocks=1 00:12:08.885 --rc geninfo_unexecuted_blocks=1 00:12:08.885 00:12:08.885 ' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.885 --rc genhtml_branch_coverage=1 00:12:08.885 --rc genhtml_function_coverage=1 00:12:08.885 --rc genhtml_legend=1 00:12:08.885 --rc geninfo_all_blocks=1 00:12:08.885 --rc geninfo_unexecuted_blocks=1 00:12:08.885 00:12:08.885 ' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.885 --rc genhtml_branch_coverage=1 00:12:08.885 --rc genhtml_function_coverage=1 00:12:08.885 --rc genhtml_legend=1 00:12:08.885 --rc geninfo_all_blocks=1 00:12:08.885 --rc geninfo_unexecuted_blocks=1 00:12:08.885 00:12:08.885 ' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:08.885 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.886 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.465 22:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.465 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:15.466 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:15.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:15.466 Found net devices under 0000:af:00.0: cvl_0_0 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:15.466 Found net devices under 0000:af:00.1: cvl_0_1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.466 22:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:12:15.466 00:12:15.466 --- 10.0.0.2 ping statistics --- 00:12:15.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.466 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:15.466 00:12:15.466 --- 10.0.0.1 ping statistics --- 00:12:15.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.466 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.466 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=212986 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 212986 00:12:15.467 22:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 212986 ']' 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 [2024-12-16 22:18:04.566749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:15.467 [2024-12-16 22:18:04.566799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.467 [2024-12-16 22:18:04.644462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.467 [2024-12-16 22:18:04.667262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.467 [2024-12-16 22:18:04.667299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.467 [2024-12-16 22:18:04.667306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.467 [2024-12-16 22:18:04.667313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.467 [2024-12-16 22:18:04.667318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
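The target bring-up above follows a fixed pattern: launch nvmf_tgt inside the prepared namespace, wait for its RPC socket, then configure the transport. A minimal sketch, with the polling loop as a simplification of the waitforlisten helper (paths and masks as in this run):

    # Start the target in the namespace; -m 0xF = 4 cores, -e 0xFFFF = all trace groups.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Simplified waitforlisten: block until the RPC socket answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # as issued just below

Once the transport exists, the subsystems cnode1..cnode4 created below each get a null bdev namespace and a TCP listener on 10.0.0.2:4420, which is what the discovery log enumerates afterwards.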
00:12:15.467 [2024-12-16 22:18:04.668776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.467 [2024-12-16 22:18:04.668883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.467 [2024-12-16 22:18:04.668989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.467 [2024-12-16 22:18:04.668989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 [2024-12-16 22:18:04.800667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 Null1 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 [2024-12-16 22:18:04.857328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 Null2 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:15.467 Null3 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.467 Null4 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.467 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.468 22:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.468 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:15.468 00:12:15.468 Discovery Log Number of Records 6, Generation counter 6 00:12:15.468 =====Discovery Log Entry 0====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: current discovery subsystem 00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4420 00:12:15.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: explicit discovery connections, duplicate discovery information 00:12:15.468 sectype: none 00:12:15.468 =====Discovery Log Entry 1====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: nvme subsystem 00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4420 00:12:15.468 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: none 00:12:15.468 sectype: none 00:12:15.468 =====Discovery Log Entry 2====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: nvme subsystem 00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4420 00:12:15.468 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: none 00:12:15.468 sectype: none 00:12:15.468 =====Discovery Log Entry 3====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: nvme subsystem 00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4420 00:12:15.468 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: none 00:12:15.468 sectype: none 00:12:15.468 =====Discovery Log Entry 4====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: nvme subsystem 
00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4420 00:12:15.468 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: none 00:12:15.468 sectype: none 00:12:15.468 =====Discovery Log Entry 5====== 00:12:15.468 trtype: tcp 00:12:15.468 adrfam: ipv4 00:12:15.468 subtype: discovery subsystem referral 00:12:15.468 treq: not required 00:12:15.468 portid: 0 00:12:15.468 trsvcid: 4430 00:12:15.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.468 traddr: 10.0.0.2 00:12:15.468 eflags: none 00:12:15.468 sectype: none 00:12:15.468 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:15.468 Perform nvmf subsystem discovery via RPC 00:12:15.468 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:15.468 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.468 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.729 [ 00:12:15.729 { 00:12:15.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:15.729 "subtype": "Discovery", 00:12:15.729 "listen_addresses": [ 00:12:15.729 { 00:12:15.729 "trtype": "TCP", 00:12:15.729 "adrfam": "IPv4", 00:12:15.729 "traddr": "10.0.0.2", 00:12:15.729 "trsvcid": "4420" 00:12:15.729 } 00:12:15.729 ], 00:12:15.729 "allow_any_host": true, 00:12:15.729 "hosts": [] 00:12:15.729 }, 00:12:15.729 { 00:12:15.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.729 "subtype": "NVMe", 00:12:15.729 "listen_addresses": [ 00:12:15.729 { 00:12:15.729 "trtype": "TCP", 00:12:15.729 "adrfam": "IPv4", 00:12:15.729 "traddr": "10.0.0.2", 00:12:15.729 "trsvcid": "4420" 00:12:15.729 } 00:12:15.729 ], 00:12:15.729 "allow_any_host": true, 00:12:15.729 "hosts": [], 00:12:15.729 "serial_number": "SPDK00000000000001", 00:12:15.729 "model_number": "SPDK bdev Controller", 00:12:15.729 "max_namespaces": 32, 00:12:15.729 "min_cntlid": 1, 00:12:15.729 "max_cntlid": 65519, 00:12:15.729 "namespaces": [ 00:12:15.729 { 00:12:15.729 "nsid": 1, 00:12:15.729 "bdev_name": "Null1", 00:12:15.729 "name": "Null1", 00:12:15.729 "nguid": "911C9825003B4DFEB0A09910DEAF1368", 00:12:15.729 "uuid": "911c9825-003b-4dfe-b0a0-9910deaf1368" 00:12:15.729 } 00:12:15.729 ] 00:12:15.729 }, 00:12:15.729 { 00:12:15.729 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:15.729 "subtype": "NVMe", 00:12:15.729 "listen_addresses": [ 00:12:15.729 { 00:12:15.729 "trtype": "TCP", 00:12:15.729 "adrfam": "IPv4", 00:12:15.729 "traddr": "10.0.0.2", 00:12:15.729 "trsvcid": "4420" 00:12:15.729 } 00:12:15.729 ], 00:12:15.729 "allow_any_host": true, 00:12:15.729 "hosts": [], 00:12:15.729 "serial_number": "SPDK00000000000002", 00:12:15.729 "model_number": "SPDK bdev Controller", 00:12:15.729 "max_namespaces": 32, 00:12:15.729 "min_cntlid": 1, 00:12:15.729 "max_cntlid": 65519, 00:12:15.729 "namespaces": [ 00:12:15.729 { 00:12:15.729 "nsid": 1, 00:12:15.729 "bdev_name": "Null2", 00:12:15.729 "name": "Null2", 00:12:15.729 "nguid": "93A05F61A74B4C0D9343D69B18C42230", 00:12:15.729 "uuid": "93a05f61-a74b-4c0d-9343-d69b18c42230" 00:12:15.729 } 00:12:15.729 ] 00:12:15.729 }, 00:12:15.729 { 00:12:15.729 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:15.729 "subtype": "NVMe", 00:12:15.729 "listen_addresses": [ 00:12:15.729 { 00:12:15.729 "trtype": "TCP", 00:12:15.729 "adrfam": "IPv4", 00:12:15.729 "traddr": "10.0.0.2", 
00:12:15.729 "trsvcid": "4420" 00:12:15.729 } 00:12:15.729 ], 00:12:15.729 "allow_any_host": true, 00:12:15.729 "hosts": [], 00:12:15.730 "serial_number": "SPDK00000000000003", 00:12:15.730 "model_number": "SPDK bdev Controller", 00:12:15.730 "max_namespaces": 32, 00:12:15.730 "min_cntlid": 1, 00:12:15.730 "max_cntlid": 65519, 00:12:15.730 "namespaces": [ 00:12:15.730 { 00:12:15.730 "nsid": 1, 00:12:15.730 "bdev_name": "Null3", 00:12:15.730 "name": "Null3", 00:12:15.730 "nguid": "C169348740D34BF781F42603197B300A", 00:12:15.730 "uuid": "c1693487-40d3-4bf7-81f4-2603197b300a" 00:12:15.730 } 00:12:15.730 ] 00:12:15.730 }, 00:12:15.730 { 00:12:15.730 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:15.730 "subtype": "NVMe", 00:12:15.730 "listen_addresses": [ 00:12:15.730 { 00:12:15.730 "trtype": "TCP", 00:12:15.730 "adrfam": "IPv4", 00:12:15.730 "traddr": "10.0.0.2", 00:12:15.730 "trsvcid": "4420" 00:12:15.730 } 00:12:15.730 ], 00:12:15.730 "allow_any_host": true, 00:12:15.730 "hosts": [], 00:12:15.730 "serial_number": "SPDK00000000000004", 00:12:15.730 "model_number": "SPDK bdev Controller", 00:12:15.730 "max_namespaces": 32, 00:12:15.730 "min_cntlid": 1, 00:12:15.730 "max_cntlid": 65519, 00:12:15.730 "namespaces": [ 00:12:15.730 { 00:12:15.730 "nsid": 1, 00:12:15.730 "bdev_name": "Null4", 00:12:15.730 "name": "Null4", 00:12:15.730 "nguid": "D2C559FD2C8A46E0AF289D13B3653F67", 00:12:15.730 "uuid": "d2c559fd-2c8a-46e0-af28-9d13b3653f67" 00:12:15.730 } 00:12:15.730 ] 00:12:15.730 } 00:12:15.730 ] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:15.730 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.730 rmmod nvme_tcp 00:12:15.730 rmmod nvme_fabrics 00:12:15.730 rmmod nvme_keyring 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 212986 ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 212986 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 212986 ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 212986 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212986 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212986' 00:12:15.730 killing process with pid 212986 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 212986 00:12:15.730 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 212986 00:12:15.990 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.990 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.532 00:12:18.532 real 0m9.279s 00:12:18.532 user 0m5.494s 00:12:18.532 sys 0m4.787s 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.532 ************************************ 00:12:18.532 END TEST nvmf_target_discovery 00:12:18.532 ************************************ 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.532 ************************************ 00:12:18.532 START TEST nvmf_referrals 00:12:18.532 ************************************ 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.532 * Looking for test storage... 
00:12:18.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:18.532 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.533 --rc genhtml_branch_coverage=1 00:12:18.533 --rc genhtml_function_coverage=1 00:12:18.533 --rc genhtml_legend=1 00:12:18.533 --rc geninfo_all_blocks=1 00:12:18.533 --rc geninfo_unexecuted_blocks=1 00:12:18.533 00:12:18.533 ' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.533 --rc genhtml_branch_coverage=1 00:12:18.533 --rc genhtml_function_coverage=1 00:12:18.533 --rc genhtml_legend=1 00:12:18.533 --rc geninfo_all_blocks=1 00:12:18.533 --rc geninfo_unexecuted_blocks=1 00:12:18.533 00:12:18.533 ' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.533 --rc genhtml_branch_coverage=1 00:12:18.533 --rc genhtml_function_coverage=1 00:12:18.533 --rc genhtml_legend=1 00:12:18.533 --rc geninfo_all_blocks=1 00:12:18.533 --rc geninfo_unexecuted_blocks=1 00:12:18.533 00:12:18.533 ' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.533 --rc genhtml_branch_coverage=1 00:12:18.533 --rc genhtml_function_coverage=1 00:12:18.533 --rc genhtml_legend=1 00:12:18.533 --rc geninfo_all_blocks=1 00:12:18.533 --rc geninfo_unexecuted_blocks=1 00:12:18.533 00:12:18.533 ' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
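Note: NVMF_REFERRAL_IP_1 and NVMF_REFERRAL_IP_2 above, together with NVMF_REFERRAL_IP_3 and NVMF_PORT_REFERRAL on the next trace lines, parameterize the referral RPCs exercised near the end of this section (referrals.sh@44-46, @48, and @52-54). A minimal sketch of the equivalent standalone calls against an already-running target, assuming the default RPC socket at /var/tmp/spdk.sock:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      # register a discovery referral to $ip on referral port 4430
      "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the test asserts three referrals exist, then removes them the same way
  "$RPC" nvmf_discovery_get_referrals | jq length   # expect: 3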
00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.533 22:18:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:25.112 22:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:25.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:25.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:25.112 
22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:25.112 Found net devices under 0000:af:00.0: cvl_0_0 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:25.112 Found net devices under 0000:af:00.1: cvl_0_1 00:12:25.112 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.113 22:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:12:25.113 00:12:25.113 --- 10.0.0.2 ping statistics --- 00:12:25.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.113 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:25.113 00:12:25.113 --- 10.0.0.1 ping statistics --- 00:12:25.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.113 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=216988 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 216988 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 216988 ']' 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
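Note: nvmfappstart above launches the target inside the server-side namespace (the traced ip netns exec ... nvmf_tgt command) and then blocks in waitforlisten until the RPC socket answers; the SPDK/DPDK start-up notices that follow are printed while it waits. A simplified bash stand-in for that sequence, with the polling loop condensed from the real helper in autotest_common.sh:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target in the cvl_0_0_ns_spdk namespace with the same flags as above
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app responds (the real
  # waitforlisten also verifies the pid is still alive on each pass)
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done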
00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.113 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 [2024-12-16 22:18:14.003450] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:25.113 [2024-12-16 22:18:14.003496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.113 [2024-12-16 22:18:14.080865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.113 [2024-12-16 22:18:14.104036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.113 [2024-12-16 22:18:14.104074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.113 [2024-12-16 22:18:14.104081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.113 [2024-12-16 22:18:14.104087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.113 [2024-12-16 22:18:14.104092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.113 [2024-12-16 22:18:14.105581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.113 [2024-12-16 22:18:14.105690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.113 [2024-12-16 22:18:14.105771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.113 [2024-12-16 22:18:14.105772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 [2024-12-16 22:18:14.239021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
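Note: the two rpc_cmd calls traced just above create the TCP transport and expose the discovery subsystem on port 8009; the nvmf_tcp_listen NOTICE on the next line confirms the listener is up. Their standalone equivalents, again assuming the default RPC socket:

  # flags copied verbatim from the traced commands; -u 8192 sets the
  # in-capsule data size for the TCP transport
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # put the discovery service on the target-side address set up by nvmftestinit
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery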
00:12:25.113 [2024-12-16 22:18:14.263338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.113 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:25.114 22:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.114 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.373 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:25.373 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:25.373 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:25.373 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.374 22:18:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.631 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.889 22:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.889 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.890 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.148 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.407 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
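
[Note] The referral phase traced above is a simple round trip: referrals are registered over the target's RPC socket, read back both through rpc_cmd and through a host-side nvme discover, and the two views must agree before the referrals are removed again. A minimal sketch of that round trip, assuming a running target with rpc.py on PATH; the addresses, ports, and jq filters are the ones in the trace (the --hostnqn/--hostid flags are omitted here for brevity):

# Register a referral on the discovery subsystem (RPC side).
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
# Read the referral list back from the target...
rpc_ips=$(rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
# ...and from the host's point of view, via the discovery service on 8009.
nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
[[ $rpc_ips == "$nvme_ips" ]]   # the test asserts the two views match
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
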
00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.666 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.666 rmmod nvme_tcp 00:12:26.927 rmmod nvme_fabrics 00:12:26.927 rmmod nvme_keyring 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 216988 ']' 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 216988 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 216988 ']' 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 216988 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216988 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216988' 00:12:26.927 killing process with pid 216988 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 216988 00:12:26.927 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 216988 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.187 22:18:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.096 00:12:29.096 real 0m11.002s 00:12:29.096 user 0m12.594s 00:12:29.096 sys 0m5.243s 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.096 ************************************ 00:12:29.096 END TEST nvmf_referrals 00:12:29.096 ************************************ 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.096 ************************************ 00:12:29.096 START TEST nvmf_connect_disconnect 00:12:29.096 ************************************ 00:12:29.096 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.357 * Looking for test storage... 00:12:29.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:29.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.357 --rc genhtml_branch_coverage=1 00:12:29.357 --rc genhtml_function_coverage=1 00:12:29.357 --rc genhtml_legend=1 00:12:29.357 --rc geninfo_all_blocks=1 00:12:29.357 --rc geninfo_unexecuted_blocks=1 00:12:29.357 00:12:29.357 ' 00:12:29.357 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:29.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.357 --rc genhtml_branch_coverage=1 00:12:29.357 --rc genhtml_function_coverage=1 00:12:29.357 --rc genhtml_legend=1 00:12:29.357 --rc geninfo_all_blocks=1 00:12:29.357 --rc geninfo_unexecuted_blocks=1 00:12:29.358 00:12:29.358 ' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:29.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.358 --rc genhtml_branch_coverage=1 00:12:29.358 --rc genhtml_function_coverage=1 00:12:29.358 --rc genhtml_legend=1 00:12:29.358 --rc geninfo_all_blocks=1 00:12:29.358 --rc geninfo_unexecuted_blocks=1 00:12:29.358 00:12:29.358 ' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:29.358 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.358 --rc genhtml_branch_coverage=1 00:12:29.358 --rc genhtml_function_coverage=1 00:12:29.358 --rc genhtml_legend=1 00:12:29.358 --rc geninfo_all_blocks=1 00:12:29.358 --rc geninfo_unexecuted_blocks=1 00:12:29.358 00:12:29.358 ' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.358 22:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.358 22:18:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.358 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.359 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.359 22:18:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.940 
22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.940 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:35.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.941 
22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:35.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:35.941 Found net devices under 0000:af:00.0: cvl_0_0 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
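
[Note] The device scan running here is driven by nothing more exotic than sysfs: for each NVMe-capable NIC that matched the e810/x722/mlx PCI ID tables, common.sh globs the device's net/ directory to learn the interface name the kernel assigned it. A minimal sketch of that step, assuming a Linux sysfs layout; the PCI address is the one from the trace:

pci=0000:af:00.0                                  # address from the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
if [[ -e ${pci_net_devs[0]} ]]; then              # glob actually matched
    pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
fi
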
00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:35.941 Found net devices under 0000:af:00.1: cvl_0_1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:12:35.941 00:12:35.941 --- 10.0.0.2 ping statistics --- 00:12:35.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.941 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:35.941 22:18:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:35.942 00:12:35.942 --- 10.0.0.1 ping statistics --- 00:12:35.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.942 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=220993 00:12:35.942 22:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 220993 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 220993 ']' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 [2024-12-16 22:18:25.101928] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:35.942 [2024-12-16 22:18:25.101972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.942 [2024-12-16 22:18:25.176357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.942 [2024-12-16 22:18:25.198535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.942 [2024-12-16 22:18:25.198571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.942 [2024-12-16 22:18:25.198579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.942 [2024-12-16 22:18:25.198585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.942 [2024-12-16 22:18:25.198589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
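
[Note] waitforlisten above is the usual autotest barrier: it polls the freshly launched target's RPC Unix socket until the application answers, bailing out early if the process dies first. A rough sketch of the pattern, not the exact helper in autotest_common.sh; the function name, retry count, and poll interval are illustrative:

wait_for_rpc() {   # hypothetical name; sketch of the waitforlisten pattern
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                     # never came up
}
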
00:12:35.942 [2024-12-16 22:18:25.199946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.942 [2024-12-16 22:18:25.200052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.942 [2024-12-16 22:18:25.200138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.942 [2024-12-16 22:18:25.200138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 [2024-12-16 22:18:25.356621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 22:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 [2024-12-16 22:18:25.419777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:35.942 22:18:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:38.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.608 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.039 [2024-12-16 22:19:41.623034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:13:52.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.863 [2024-12-16 22:20:28.197052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:14:38.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.494 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.024 [2024-12-16 22:21:21.372073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:15:32.024 [2024-12-16 22:21:21.372112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:15:32.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.923 [2024-12-16 22:21:42.117122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:15:52.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.786 [2024-12-16 22:21:51.311005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:16:01.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.234 [2024-12-16 22:22:07.527050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572700 is same with the state(6) to be set 00:16:18.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.095 22:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.095 rmmod nvme_tcp 00:16:27.095 rmmod nvme_fabrics 00:16:27.095 rmmod nvme_keyring 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:27.095 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 220993 ']' 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 220993 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 220993 ']' 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 220993 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220993 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220993' 00:16:27.354 killing process with pid 220993 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 220993 00:16:27.354 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 220993 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:27.354 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.355 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.355 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.355 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:27.355 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.355 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.355 22:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:29.894 00:16:29.894 real 4m0.312s 00:16:29.894 user 15m17.642s 00:16:29.894 sys 0m24.839s 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:29.894 ************************************ 00:16:29.894 END TEST nvmf_connect_disconnect 00:16:29.894 ************************************ 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.894 ************************************ 00:16:29.894 START TEST nvmf_multitarget 00:16:29.894 ************************************ 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:29.894 * Looking for test storage... 00:16:29.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:29.894 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:29.895 22:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.895 --rc genhtml_branch_coverage=1 00:16:29.895 --rc genhtml_function_coverage=1 00:16:29.895 --rc genhtml_legend=1 00:16:29.895 --rc geninfo_all_blocks=1 00:16:29.895 --rc geninfo_unexecuted_blocks=1 00:16:29.895 00:16:29.895 ' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.895 --rc genhtml_branch_coverage=1 00:16:29.895 --rc genhtml_function_coverage=1 00:16:29.895 --rc genhtml_legend=1 00:16:29.895 --rc geninfo_all_blocks=1 00:16:29.895 --rc geninfo_unexecuted_blocks=1 00:16:29.895 00:16:29.895 ' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.895 --rc genhtml_branch_coverage=1 00:16:29.895 --rc genhtml_function_coverage=1 00:16:29.895 --rc genhtml_legend=1 00:16:29.895 --rc geninfo_all_blocks=1 00:16:29.895 --rc geninfo_unexecuted_blocks=1 00:16:29.895 00:16:29.895 ' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.895 --rc genhtml_branch_coverage=1 00:16:29.895 --rc genhtml_function_coverage=1 00:16:29.895 --rc genhtml_legend=1 00:16:29.895 --rc 
geninfo_all_blocks=1 00:16:29.895 --rc geninfo_unexecuted_blocks=1 00:16:29.895 00:16:29.895 ' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:16:29.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.895 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.896 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:29.896 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:29.896 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:29.896 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:36.471 22:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:36.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.471 22:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:36.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.471 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:36.472 Found net devices under 0000:af:00.0: cvl_0_0 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:36.472 Found net devices under 0000:af:00.1: cvl_0_1 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.472 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.472 22:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:16:36.472 00:16:36.472 --- 10.0.0.2 ping statistics --- 00:16:36.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.472 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:16:36.472 00:16:36.472 --- 10.0.0.1 ping statistics --- 00:16:36.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.472 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=263860 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 263860 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 263860 ']' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
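The nvmf_tcp_init sequence traced above moves one port of the NIC (cvl_0_0) into a namespace and leaves its sibling (cvl_0_1) in the root namespace, so target and initiator exercise a real link on one host. Condensed from the commands in the trace:

    # Target-side port goes into the namespace; initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the SPDK_NVMF comment tag lets teardown find and strip the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

    # Sanity-check reachability in both directions before any NVMe traffic (the pings above).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1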
00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:36.472 [2024-12-16 22:22:25.295056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:36.472 [2024-12-16 22:22:25.295100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.472 [2024-12-16 22:22:25.372689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.472 [2024-12-16 22:22:25.395657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.472 [2024-12-16 22:22:25.395693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.472 [2024-12-16 22:22:25.395700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.472 [2024-12-16 22:22:25.395705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.472 [2024-12-16 22:22:25.395710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.472 [2024-12-16 22:22:25.397044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.472 [2024-12-16 22:22:25.397153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.472 [2024-12-16 22:22:25.397169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.472 [2024-12-16 22:22:25.397176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:36.472 "nvmf_tgt_1" 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:36.472 "nvmf_tgt_2" 00:16:36.472 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:36.473 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:36.473 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:36.473 22:22:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:36.473 true 00:16:36.473 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:36.473 true 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.731 rmmod nvme_tcp 00:16:36.731 rmmod nvme_fabrics 00:16:36.731 rmmod nvme_keyring 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 263860 ']' 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 263860 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 263860 ']' 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 263860 00:16:36.731 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263860 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263860' 00:16:36.732 killing process with pid 263860 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 263860 00:16:36.732 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 263860 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.992 22:22:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.531 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:39.531 00:16:39.531 real 0m9.462s 00:16:39.531 user 0m7.170s 00:16:39.531 sys 0m4.780s 00:16:39.531 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.531 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:39.531 ************************************ 00:16:39.531 END TEST nvmf_multitarget 00:16:39.532 ************************************ 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:39.532 ************************************ 00:16:39.532 START TEST nvmf_rpc 00:16:39.532 ************************************ 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 
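Stripped of the harness, the multitarget test that just ended drives SPDK's named-target RPCs through multitarget_rpc.py; a condensed sketch, with the path and the -s 32 argument taken as-is from the trace above:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default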
00:16:39.532 * Looking for test storage... 00:16:39.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:39.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.532 --rc genhtml_branch_coverage=1 00:16:39.532 --rc genhtml_function_coverage=1 00:16:39.532 --rc genhtml_legend=1 00:16:39.532 --rc geninfo_all_blocks=1 00:16:39.532 --rc geninfo_unexecuted_blocks=1 00:16:39.532 00:16:39.532 ' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:39.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.532 --rc genhtml_branch_coverage=1 00:16:39.532 --rc genhtml_function_coverage=1 00:16:39.532 --rc genhtml_legend=1 00:16:39.532 --rc geninfo_all_blocks=1 00:16:39.532 --rc geninfo_unexecuted_blocks=1 00:16:39.532 00:16:39.532 ' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:39.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.532 --rc genhtml_branch_coverage=1 00:16:39.532 --rc genhtml_function_coverage=1 00:16:39.532 --rc genhtml_legend=1 00:16:39.532 --rc geninfo_all_blocks=1 00:16:39.532 --rc geninfo_unexecuted_blocks=1 00:16:39.532 00:16:39.532 ' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:39.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:39.532 --rc genhtml_branch_coverage=1 00:16:39.532 --rc genhtml_function_coverage=1 00:16:39.532 --rc genhtml_legend=1 00:16:39.532 --rc geninfo_all_blocks=1 00:16:39.532 --rc geninfo_unexecuted_blocks=1 00:16:39.532 00:16:39.532 ' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
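The lcov version probe above walks scripts/common.sh's comparison helpers: lt splits both version strings on ".", "-" and ":" and compares them numerically component by component, which decides how the coverage flags are spelled. A simplified sketch of that logic, condensed from the trace (the real helper also normalizes non-numeric components through decimal):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local op=$2 v
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Compare up to the longer of the two component lists; missing parts count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *=* ]]   # all components equal: true only for operators allowing equality
}

As traced, lt 1.15 2 stops at the first component (1 < 2) and returns 0, selecting the older --rc lcov_branch_coverage spelling seen in the exported LCOV_OPTS.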
00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:39.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:39.532 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:39.533 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:39.533 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:44.812 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:44.812 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:44.812 Found net devices under 0000:af:00.0: cvl_0_0 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.812 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:44.813 Found net devices under 0000:af:00.1: cvl_0_1 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:44.813 22:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.813 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:45.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:16:45.072 00:16:45.072 --- 10.0.0.2 ping statistics --- 00:16:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.072 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:16:45.072 00:16:45.072 --- 10.0.0.1 ping statistics --- 00:16:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.072 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=267545 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 267545 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 267545 ']' 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.072 22:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.331 [2024-12-16 22:22:34.817924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
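Everything from nvmf_tcp_init through nvmfappstart above follows a single pattern: move the target-side NIC into a private network namespace, address both ends, punch a tagged hole for the NVMe/TCP port, prove the path with one ping in each direction, then launch nvmf_tgt inside the namespace. A condensed replay of the traced commands (interface names, addresses and paths are specific to this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open port 4420, tagging the rule so iptr can strip it again at teardown.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
waitforlisten "$!"    # autotest helper: blocks until the target answers on /var/tmp/spdk.sock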
00:16:45.331 [2024-12-16 22:22:34.817975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.331 [2024-12-16 22:22:34.898754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.331 [2024-12-16 22:22:34.922118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.332 [2024-12-16 22:22:34.922175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.332 [2024-12-16 22:22:34.922182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.332 [2024-12-16 22:22:34.922188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.332 [2024-12-16 22:22:34.922197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.332 [2024-12-16 22:22:34.923570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.332 [2024-12-16 22:22:34.923676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.332 [2024-12-16 22:22:34.923783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.332 [2024-12-16 22:22:34.923784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.332 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.332 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:45.332 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.332 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.332 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:45.599 "tick_rate": 2100000000, 00:16:45.600 "poll_groups": [ 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_000", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_001", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_002", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 
"current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_003", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [] 00:16:45.600 } 00:16:45.600 ] 00:16:45.600 }' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 [2024-12-16 22:22:35.180497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:45.600 "tick_rate": 2100000000, 00:16:45.600 "poll_groups": [ 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_000", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [ 00:16:45.600 { 00:16:45.600 "trtype": "TCP" 00:16:45.600 } 00:16:45.600 ] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_001", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [ 00:16:45.600 { 00:16:45.600 "trtype": "TCP" 00:16:45.600 } 00:16:45.600 ] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_002", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [ 00:16:45.600 { 00:16:45.600 "trtype": "TCP" 
00:16:45.600 } 00:16:45.600 ] 00:16:45.600 }, 00:16:45.600 { 00:16:45.600 "name": "nvmf_tgt_poll_group_003", 00:16:45.600 "admin_qpairs": 0, 00:16:45.600 "io_qpairs": 0, 00:16:45.600 "current_admin_qpairs": 0, 00:16:45.600 "current_io_qpairs": 0, 00:16:45.600 "pending_bdev_io": 0, 00:16:45.600 "completed_nvme_io": 0, 00:16:45.600 "transports": [ 00:16:45.600 { 00:16:45.600 "trtype": "TCP" 00:16:45.600 } 00:16:45.600 ] 00:16:45.600 } 00:16:45.600 ] 00:16:45.600 }' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 Malloc1 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 [2024-12-16 22:22:35.359565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:45.858 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:45.859 [2024-12-16 22:22:35.388058] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:45.859 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:45.859 could not add new controller: failed to write to nvme-fabrics device 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:45.859 22:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.859 22:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.233 22:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.233 22:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.233 22:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.233 22:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.233 22:22:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.136 [2024-12-16 22:22:38.796008] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:49.136 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:49.136 could not add new controller: failed to write to nvme-fabrics device 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.136 
22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.136 22:22:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.511 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.511 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.511 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.511 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.511 22:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.413 22:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.413 
22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 [2024-12-16 22:22:42.057816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.413 22:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.789 22:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.789 22:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.789 22:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.789 22:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.789 22:22:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.689 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 [2024-12-16 22:22:45.447837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.947 22:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.323 22:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.323 22:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:57.323 22:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.323 22:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:57.323 22:22:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 [2024-12-16 22:22:48.801354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.225 22:22:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.600 22:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.600 22:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:00.600 22:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.600 22:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:00.600 22:22:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:02.499 
22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
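Every nvme connect in this run is followed by the same readiness poll. Reconstructed from the autotest_common.sh@1202-@1212 xtrace lines, the helper behaves roughly like the bash below; take it as an approximation of the real waitforserial, not its exact body:

    # Poll until the expected number of block devices carrying the serial
    # shows up in lsblk; the trace shows a 2s sleep and a 15-try bound.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

The disconnect-side twin (waitforserial_disconnect, @1223-@1235) inverts the check: it loops until grep -q -w no longer finds the serial in the lsblk output.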
00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 [2024-12-16 22:22:52.171222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.499 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:02.500 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.500 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.500 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.500 22:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.875 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.875 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:03.875 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.875 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:03.875 22:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 [2024-12-16 22:22:55.454271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.775 22:22:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.149 22:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:07.149 22:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:07.149 22:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.149 22:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:07.149 22:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:09.051 
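That is the last of five passes through the attach/detach cycle driven by rpc.sh@81-@94. Condensed from the xtrace above, each pass issues the following; rpc_cmd wraps scripts/rpc.py, and the NQN, address, and serial are the literal values from this run:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
            --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME             # poll until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME  # poll until it is gone again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The seq 1 5 just traced feeds a second loop (rpc.sh@99-@107) that churns the same subsystem and namespace purely over RPC, with no host connect in between.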
22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 [2024-12-16 22:22:58.722067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.051 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.310 [2024-12-16 22:22:58.774223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.310 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 
22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 [2024-12-16 22:22:58.822344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 [2024-12-16 22:22:58.870512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 [2024-12-16 22:22:58.922688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.311 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:09.311 "tick_rate": 2100000000, 00:17:09.311 "poll_groups": [ 00:17:09.311 { 00:17:09.311 "name": "nvmf_tgt_poll_group_000", 00:17:09.311 "admin_qpairs": 2, 00:17:09.311 "io_qpairs": 168, 00:17:09.311 "current_admin_qpairs": 0, 00:17:09.311 "current_io_qpairs": 0, 00:17:09.311 "pending_bdev_io": 0, 00:17:09.311 "completed_nvme_io": 217, 00:17:09.311 "transports": [ 00:17:09.311 { 00:17:09.311 "trtype": "TCP" 00:17:09.311 } 00:17:09.311 ] 00:17:09.311 }, 00:17:09.311 { 00:17:09.311 "name": "nvmf_tgt_poll_group_001", 00:17:09.311 "admin_qpairs": 2, 00:17:09.311 "io_qpairs": 168, 00:17:09.311 "current_admin_qpairs": 0, 00:17:09.311 "current_io_qpairs": 0, 00:17:09.311 "pending_bdev_io": 0, 00:17:09.311 "completed_nvme_io": 269, 00:17:09.311 "transports": [ 00:17:09.311 { 00:17:09.311 "trtype": "TCP" 00:17:09.311 } 00:17:09.311 ] 00:17:09.311 }, 00:17:09.311 { 00:17:09.311 "name": "nvmf_tgt_poll_group_002", 00:17:09.311 "admin_qpairs": 1, 00:17:09.311 "io_qpairs": 168, 00:17:09.311 "current_admin_qpairs": 0, 00:17:09.311 "current_io_qpairs": 0, 00:17:09.311 "pending_bdev_io": 0, 00:17:09.311 "completed_nvme_io": 217, 00:17:09.312 "transports": [ 00:17:09.312 { 00:17:09.312 "trtype": "TCP" 00:17:09.312 } 00:17:09.312 ] 00:17:09.312 }, 00:17:09.312 { 00:17:09.312 "name": "nvmf_tgt_poll_group_003", 00:17:09.312 "admin_qpairs": 2, 00:17:09.312 "io_qpairs": 168, 00:17:09.312 "current_admin_qpairs": 0, 00:17:09.312 "current_io_qpairs": 0, 00:17:09.312 "pending_bdev_io": 0, 00:17:09.312 "completed_nvme_io": 319, 00:17:09.312 "transports": [ 00:17:09.312 { 00:17:09.312 "trtype": "TCP" 00:17:09.312 } 00:17:09.312 ] 00:17:09.312 } 00:17:09.312 ] 00:17:09.312 }' 00:17:09.312 22:22:58 
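The jsum calls that follow fold these per-poll-group counters into run totals. A minimal sketch, assuming the JSON above is held in $stats as the @110 assignment suggests:

    # Sum one numeric field across all poll groups: jq emits one number per
    # group, awk accumulates; mirrors the rpc.sh@19-@20 trace lines.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 -> 7,   hence (( 7 > 0 ))
    jsum '.poll_groups[].io_qpairs'      # 4*168   -> 672, hence (( 672 > 0 ))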
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:09.312 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:09.312 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:09.312 22:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.571 rmmod nvme_tcp 00:17:09.571 rmmod nvme_fabrics 00:17:09.571 rmmod nvme_keyring 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 267545 ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 267545 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 267545 ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 267545 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267545 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267545' 
00:17:09.571 killing process with pid 267545 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 267545 00:17:09.571 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 267545 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.831 22:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.371 00:17:12.371 real 0m32.747s 00:17:12.371 user 1m39.092s 00:17:12.371 sys 0m6.384s 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.371 ************************************ 00:17:12.371 END TEST nvmf_rpc 00:17:12.371 ************************************ 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.371 ************************************ 00:17:12.371 START TEST nvmf_invalid 00:17:12.371 ************************************ 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:12.371 * Looking for test storage... 
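The teardown just traced (nvmftestfini plus killprocess, which first confirms via ps --no-headers -o comm= that the pid is still reactor_0) reduces to a handful of steps. A sketch of the sequence as it appears above, 267545 being this run's nvmf target pid:

    sync
    modprobe -v -r nvme-tcp                               # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are its output
    modprobe -v -r nvme-fabrics
    kill 267545 && wait 267545                            # stop the SPDK reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only SPDK's firewall rules
    ip -4 addr flush cvl_0_1                              # clear the test interface address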
00:17:12.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:12.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.371 --rc genhtml_branch_coverage=1 00:17:12.371 --rc genhtml_function_coverage=1 00:17:12.371 --rc genhtml_legend=1 00:17:12.371 --rc geninfo_all_blocks=1 00:17:12.371 --rc geninfo_unexecuted_blocks=1 00:17:12.371 00:17:12.371 ' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:12.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.371 --rc genhtml_branch_coverage=1 00:17:12.371 --rc genhtml_function_coverage=1 00:17:12.371 --rc genhtml_legend=1 00:17:12.371 --rc geninfo_all_blocks=1 00:17:12.371 --rc geninfo_unexecuted_blocks=1 00:17:12.371 00:17:12.371 ' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:12.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.371 --rc genhtml_branch_coverage=1 00:17:12.371 --rc genhtml_function_coverage=1 00:17:12.371 --rc genhtml_legend=1 00:17:12.371 --rc geninfo_all_blocks=1 00:17:12.371 --rc geninfo_unexecuted_blocks=1 00:17:12.371 00:17:12.371 ' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:12.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.371 --rc genhtml_branch_coverage=1 00:17:12.371 --rc genhtml_function_coverage=1 00:17:12.371 --rc genhtml_legend=1 00:17:12.371 --rc geninfo_all_blocks=1 00:17:12.371 --rc geninfo_unexecuted_blocks=1 00:17:12.371 00:17:12.371 ' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:12.371 22:23:01 
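The lcov probe at the top of invalid.sh leans on scripts/common.sh's component-wise version compare (lt 1.15 2 expands to cmp_versions 1.15 '<' 2). Below is a trimmed sketch of just the less-than path visible in the trace; the real function also tallies gt/eq results per the case "$op" dispatch and validates each component through decimal():

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"   # 1.15 -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"   # 2    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less at this component
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "old lcov, relax coverage options"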
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.371 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
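The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 running '[' '' -eq 1 ']' against a variable that is empty in this environment; the test simply fails and the script continues, so it is noise rather than a stopper. The generic hardening for that pattern (shown here as an illustration with a hypothetical variable, not as a patch to common.sh) is to give the operand a numeric default:

    # instead of:  [ "$flag" -eq 1 ]      # errors when $flag is empty/unset
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then       # empty/unset coerces to 0
        echo "flag set"
    fi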
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.372 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:17.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.653 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:17.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:17.913 Found net devices under 0000:af:00.0: cvl_0_0 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:17.913 Found net devices under 0000:af:00.1: cvl_0_1 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
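
The loop above maps each supported PCI function to its kernel net device through a sysfs glob. Condensed to just that lookup, with the two E810 ports (0x8086:0x159b, driver ice) found in this run:

for pci in 0000:af:00.0 0000:af:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one entry per net device
  echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
done
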
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:17.913 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:17.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:17:17.914 00:17:17.914 --- 10.0.0.2 ping statistics --- 00:17:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.914 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
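
nvmf_tcp_init above splits the two ports of one NIC across network namespaces so a single host can play both target and initiator. The wiring it performs, condensed (interface names and addresses as in this run):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # then verify both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
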
00:17:17.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:17.914 00:17:17.914 --- 10.0.0.1 ping statistics --- 00:17:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.914 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.914 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=275192 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 275192 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 275192 ']' 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.173 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.173 [2024-12-16 22:23:07.705265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
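
With networking verified and nvme-tcp loaded, nvmfappstart launches the target inside the namespace with core mask 0xF (hence the four reactors reported below) and waits for its RPC socket. A sketch of that launch-and-wait pattern, with paths as in this run; the polling loop is illustrative rather than the verbatim waitforlisten implementation:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1                      # wait for /var/tmp/spdk.sock to accept RPCs
done
echo "nvmf_tgt ready, pid $nvmfpid"
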
00:17:18.173 [2024-12-16 22:23:07.705318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.173 [2024-12-16 22:23:07.785273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.173 [2024-12-16 22:23:07.808864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.173 [2024-12-16 22:23:07.808903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.173 [2024-12-16 22:23:07.808910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.173 [2024-12-16 22:23:07.808916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.173 [2024-12-16 22:23:07.808921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.173 [2024-12-16 22:23:07.810319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.173 [2024-12-16 22:23:07.810355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.173 [2024-12-16 22:23:07.810459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.173 [2024-12-16 22:23:07.810461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:18.431 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11445 00:17:18.431 [2024-12-16 22:23:08.103648] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:18.689 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:18.689 { 00:17:18.689 "nqn": "nqn.2016-06.io.spdk:cnode11445", 00:17:18.689 "tgt_name": "foobar", 00:17:18.689 "method": "nvmf_create_subsystem", 00:17:18.689 "req_id": 1 00:17:18.689 } 00:17:18.689 Got JSON-RPC error response 00:17:18.689 response: 00:17:18.689 { 00:17:18.689 "code": -32603, 00:17:18.689 "message": "Unable to find target foobar" 00:17:18.689 }' 00:17:18.689 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:18.689 { 00:17:18.689 "nqn": "nqn.2016-06.io.spdk:cnode11445", 00:17:18.689 "tgt_name": "foobar", 00:17:18.689 "method": "nvmf_create_subsystem", 00:17:18.689 "req_id": 1 00:17:18.689 } 00:17:18.689 Got JSON-RPC error response 00:17:18.689 
response: 00:17:18.689 { 00:17:18.689 "code": -32603, 00:17:18.690 "message": "Unable to find target foobar" 00:17:18.690 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3931 00:17:18.690 [2024-12-16 22:23:08.304309] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3931: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:18.690 { 00:17:18.690 "nqn": "nqn.2016-06.io.spdk:cnode3931", 00:17:18.690 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:18.690 "method": "nvmf_create_subsystem", 00:17:18.690 "req_id": 1 00:17:18.690 } 00:17:18.690 Got JSON-RPC error response 00:17:18.690 response: 00:17:18.690 { 00:17:18.690 "code": -32602, 00:17:18.690 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:18.690 }' 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:18.690 { 00:17:18.690 "nqn": "nqn.2016-06.io.spdk:cnode3931", 00:17:18.690 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:18.690 "method": "nvmf_create_subsystem", 00:17:18.690 "req_id": 1 00:17:18.690 } 00:17:18.690 Got JSON-RPC error response 00:17:18.690 response: 00:17:18.690 { 00:17:18.690 "code": -32602, 00:17:18.690 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:18.690 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:18.690 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12610 00:17:18.948 [2024-12-16 22:23:08.516999] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12610: invalid model number 'SPDK_Controller' 00:17:18.948 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:18.948 { 00:17:18.949 "nqn": "nqn.2016-06.io.spdk:cnode12610", 00:17:18.949 "model_number": "SPDK_Controller\u001f", 00:17:18.949 "method": "nvmf_create_subsystem", 00:17:18.949 "req_id": 1 00:17:18.949 } 00:17:18.949 Got JSON-RPC error response 00:17:18.949 response: 00:17:18.949 { 00:17:18.949 "code": -32602, 00:17:18.949 "message": "Invalid MN SPDK_Controller\u001f" 00:17:18.949 }' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:18.949 { 00:17:18.949 "nqn": "nqn.2016-06.io.spdk:cnode12610", 00:17:18.949 "model_number": "SPDK_Controller\u001f", 00:17:18.949 "method": "nvmf_create_subsystem", 00:17:18.949 "req_id": 1 00:17:18.949 } 00:17:18.949 Got JSON-RPC error response 00:17:18.949 response: 00:17:18.949 { 00:17:18.949 "code": -32602, 00:17:18.949 "message": "Invalid MN SPDK_Controller\u001f" 00:17:18.949 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:18.949 22:23:08 
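
The three negative cases above reproduce directly from the shell; $'...\037' embeds the 0x1f control byte that invalidates the serial and model numbers (rpc.py path shortened from this run's checkout):

rpc=./scripts/rpc.py
$rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11445                       # -32603 Unable to find target
$rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3931   # -32602 Invalid SN
$rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12610       # -32602 Invalid MN

The gen_random_s trace that follows builds the longer random strings used for the length-limit cases.
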
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:18.949 
22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:18.949 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.208 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 
00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ')um*[;P$&Ia_86=kR;3K$' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')um*[;P$&Ia_86=kR;3K$' nqn.2016-06.io.spdk:cnode14936 00:17:19.209 [2024-12-16 22:23:08.862142] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14936: invalid serial number ')um*[;P$&Ia_86=kR;3K$' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:19.209 { 00:17:19.209 "nqn": "nqn.2016-06.io.spdk:cnode14936", 00:17:19.209 "serial_number": ")um*[;P$&Ia_86=kR;3K$", 00:17:19.209 "method": "nvmf_create_subsystem", 00:17:19.209 "req_id": 1 00:17:19.209 } 00:17:19.209 Got JSON-RPC error response 00:17:19.209 response: 00:17:19.209 { 00:17:19.209 "code": -32602, 00:17:19.209 "message": "Invalid SN )um*[;P$&Ia_86=kR;3K$" 00:17:19.209 }' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:19.209 { 00:17:19.209 "nqn": "nqn.2016-06.io.spdk:cnode14936", 00:17:19.209 "serial_number": ")um*[;P$&Ia_86=kR;3K$", 00:17:19.209 "method": "nvmf_create_subsystem", 00:17:19.209 "req_id": 1 00:17:19.209 } 00:17:19.209 Got JSON-RPC error response 00:17:19.209 response: 00:17:19.209 { 00:17:19.209 "code": -32602, 00:17:19.209 "message": "Invalid SN )um*[;P$&Ia_86=kR;3K$" 00:17:19.209 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:19.209 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 
00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:19.468 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 
00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:19.469 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='?' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x72' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO' 00:17:19.470 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO' nqn.2016-06.io.spdk:cnode7900 00:17:19.728 [2024-12-16 22:23:09.335676] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7900: invalid model number 'n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO' 00:17:19.728 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:19.728 { 00:17:19.728 "nqn": "nqn.2016-06.io.spdk:cnode7900", 00:17:19.728 "model_number": "n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO", 00:17:19.728 "method": "nvmf_create_subsystem", 00:17:19.728 "req_id": 1 00:17:19.728 } 00:17:19.728 Got JSON-RPC error response 00:17:19.728 response: 00:17:19.728 { 00:17:19.728 "code": -32602, 00:17:19.728 "message": "Invalid MN n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO" 00:17:19.728 }' 00:17:19.728 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:19.728 { 00:17:19.728 "nqn": "nqn.2016-06.io.spdk:cnode7900", 00:17:19.728 "model_number": "n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO", 00:17:19.728 "method": "nvmf_create_subsystem", 00:17:19.728 "req_id": 1 00:17:19.728 } 00:17:19.728 Got JSON-RPC error response 00:17:19.728 response: 00:17:19.728 { 00:17:19.728 "code": -32602, 00:17:19.728 "message": "Invalid MN n(DuJu[M-O2;(MxgOKBv~|)36ou=;g0{?Y8p|jurO" 00:17:19.728 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:19.728 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:19.987 [2024-12-16 22:23:09.532417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.987 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:20.245 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:20.245 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:20.245 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:20.245 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:17:20.245 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:20.503 [2024-12-16 22:23:09.947017] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:20.503 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:20.503 { 00:17:20.503 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:20.503 "listen_address": { 00:17:20.503 "trtype": "tcp", 00:17:20.503 "traddr": "", 00:17:20.503 "trsvcid": "4421" 00:17:20.503 }, 00:17:20.503 "method": "nvmf_subsystem_remove_listener", 00:17:20.503 "req_id": 1 00:17:20.503 } 00:17:20.503 Got JSON-RPC error response 00:17:20.503 response: 00:17:20.503 { 00:17:20.503 "code": -32602, 00:17:20.503 "message": "Invalid parameters" 00:17:20.503 }' 00:17:20.503 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:20.503 { 00:17:20.503 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:20.503 "listen_address": { 00:17:20.503 "trtype": "tcp", 00:17:20.503 "traddr": "", 00:17:20.503 "trsvcid": "4421" 00:17:20.503 }, 00:17:20.503 "method": "nvmf_subsystem_remove_listener", 00:17:20.503 "req_id": 1 00:17:20.503 } 00:17:20.503 Got JSON-RPC error response 00:17:20.503 response: 00:17:20.503 { 00:17:20.503 "code": -32602, 00:17:20.503 "message": "Invalid parameters" 00:17:20.503 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:20.503 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16311 -i 0 00:17:20.503 [2024-12-16 22:23:10.159694] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16311: invalid cntlid range [0-65519] 00:17:20.503 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:20.503 { 00:17:20.503 "nqn": "nqn.2016-06.io.spdk:cnode16311", 00:17:20.503 "min_cntlid": 0, 00:17:20.503 "method": "nvmf_create_subsystem", 00:17:20.503 "req_id": 1 00:17:20.503 } 00:17:20.503 Got JSON-RPC error response 00:17:20.503 response: 00:17:20.503 { 00:17:20.503 "code": -32602, 00:17:20.503 "message": "Invalid cntlid range [0-65519]" 00:17:20.503 }' 00:17:20.503 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:20.503 { 00:17:20.503 "nqn": "nqn.2016-06.io.spdk:cnode16311", 00:17:20.503 "min_cntlid": 0, 00:17:20.503 "method": "nvmf_create_subsystem", 00:17:20.503 "req_id": 1 00:17:20.503 } 00:17:20.503 Got JSON-RPC error response 00:17:20.503 response: 00:17:20.503 { 00:17:20.503 "code": -32602, 00:17:20.503 "message": "Invalid cntlid range [0-65519]" 00:17:20.503 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:20.503 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13237 -i 65520 00:17:20.762 [2024-12-16 22:23:10.364381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13237: invalid cntlid range [65520-65519] 00:17:20.762 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:20.762 { 00:17:20.762 "nqn": "nqn.2016-06.io.spdk:cnode13237", 00:17:20.762 "min_cntlid": 
65520, 00:17:20.762 "method": "nvmf_create_subsystem", 00:17:20.762 "req_id": 1 00:17:20.762 } 00:17:20.762 Got JSON-RPC error response 00:17:20.762 response: 00:17:20.762 { 00:17:20.762 "code": -32602, 00:17:20.762 "message": "Invalid cntlid range [65520-65519]" 00:17:20.762 }' 00:17:20.762 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:20.762 { 00:17:20.762 "nqn": "nqn.2016-06.io.spdk:cnode13237", 00:17:20.762 "min_cntlid": 65520, 00:17:20.762 "method": "nvmf_create_subsystem", 00:17:20.762 "req_id": 1 00:17:20.762 } 00:17:20.762 Got JSON-RPC error response 00:17:20.762 response: 00:17:20.762 { 00:17:20.762 "code": -32602, 00:17:20.762 "message": "Invalid cntlid range [65520-65519]" 00:17:20.762 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:20.762 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15978 -I 0 00:17:21.020 [2024-12-16 22:23:10.561036] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15978: invalid cntlid range [1-0] 00:17:21.020 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:21.020 { 00:17:21.020 "nqn": "nqn.2016-06.io.spdk:cnode15978", 00:17:21.020 "max_cntlid": 0, 00:17:21.020 "method": "nvmf_create_subsystem", 00:17:21.020 "req_id": 1 00:17:21.020 } 00:17:21.020 Got JSON-RPC error response 00:17:21.020 response: 00:17:21.020 { 00:17:21.020 "code": -32602, 00:17:21.020 "message": "Invalid cntlid range [1-0]" 00:17:21.020 }' 00:17:21.020 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:21.020 { 00:17:21.020 "nqn": "nqn.2016-06.io.spdk:cnode15978", 00:17:21.020 "max_cntlid": 0, 00:17:21.020 "method": "nvmf_create_subsystem", 00:17:21.020 "req_id": 1 00:17:21.020 } 00:17:21.020 Got JSON-RPC error response 00:17:21.020 response: 00:17:21.020 { 00:17:21.020 "code": -32602, 00:17:21.020 "message": "Invalid cntlid range [1-0]" 00:17:21.020 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:21.020 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5667 -I 65520 00:17:21.278 [2024-12-16 22:23:10.753699] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5667: invalid cntlid range [1-65520] 00:17:21.278 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:21.278 { 00:17:21.278 "nqn": "nqn.2016-06.io.spdk:cnode5667", 00:17:21.278 "max_cntlid": 65520, 00:17:21.278 "method": "nvmf_create_subsystem", 00:17:21.278 "req_id": 1 00:17:21.278 } 00:17:21.278 Got JSON-RPC error response 00:17:21.278 response: 00:17:21.278 { 00:17:21.278 "code": -32602, 00:17:21.278 "message": "Invalid cntlid range [1-65520]" 00:17:21.278 }' 00:17:21.278 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:21.278 { 00:17:21.278 "nqn": "nqn.2016-06.io.spdk:cnode5667", 00:17:21.278 "max_cntlid": 65520, 00:17:21.278 "method": "nvmf_create_subsystem", 00:17:21.278 "req_id": 1 00:17:21.278 } 00:17:21.278 Got JSON-RPC error response 00:17:21.278 response: 00:17:21.278 { 00:17:21.278 "code": -32602, 00:17:21.278 "message": "Invalid cntlid range [1-65520]" 00:17:21.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
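The checks traced above drive nvmf_create_subsystem through each out-of-range controller-ID case (min_cntlid of 0 and 65520, max_cntlid of 0 and 65520) and match the JSON-RPC reply against "Invalid cntlid range". One such negative check can be reproduced outside the harness with a few lines of shell; this is a minimal sketch, assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree, with an arbitrary placeholder NQN:

    # Valid cntlids span 1-65519, so min_cntlid=0 must be rejected with a -32602 error.
    if out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-demo -i 0 2>&1); then
        echo 'unexpected success' >&2
        exit 1
    fi
    # The harness asserts on the error text the same way:
    [[ $out == *'Invalid cntlid range'* ]] && echo 'negative check passed'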
00:17:21.278 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5889 -i 6 -I 5 00:17:21.278 [2024-12-16 22:23:10.946370] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5889: invalid cntlid range [6-5] 00:17:21.278 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:21.278 { 00:17:21.278 "nqn": "nqn.2016-06.io.spdk:cnode5889", 00:17:21.278 "min_cntlid": 6, 00:17:21.278 "max_cntlid": 5, 00:17:21.278 "method": "nvmf_create_subsystem", 00:17:21.278 "req_id": 1 00:17:21.278 } 00:17:21.278 Got JSON-RPC error response 00:17:21.278 response: 00:17:21.278 { 00:17:21.278 "code": -32602, 00:17:21.278 "message": "Invalid cntlid range [6-5]" 00:17:21.278 }' 00:17:21.278 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:21.278 { 00:17:21.278 "nqn": "nqn.2016-06.io.spdk:cnode5889", 00:17:21.278 "min_cntlid": 6, 00:17:21.278 "max_cntlid": 5, 00:17:21.278 "method": "nvmf_create_subsystem", 00:17:21.278 "req_id": 1 00:17:21.278 } 00:17:21.278 Got JSON-RPC error response 00:17:21.278 response: 00:17:21.278 { 00:17:21.278 "code": -32602, 00:17:21.279 "message": "Invalid cntlid range [6-5]" 00:17:21.279 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:21.539 22:23:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:21.539 { 00:17:21.539 "name": "foobar", 00:17:21.539 "method": "nvmf_delete_target", 00:17:21.539 "req_id": 1 00:17:21.539 } 00:17:21.539 Got JSON-RPC error response 00:17:21.539 response: 00:17:21.539 { 00:17:21.539 "code": -32602, 00:17:21.539 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:21.539 }' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:21.539 { 00:17:21.539 "name": "foobar", 00:17:21.539 "method": "nvmf_delete_target", 00:17:21.539 "req_id": 1 00:17:21.539 } 00:17:21.539 Got JSON-RPC error response 00:17:21.539 response: 00:17:21.539 { 00:17:21.539 "code": -32602, 00:17:21.539 "message": "The specified target doesn't exist, cannot delete it." 
00:17:21.539 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.539 rmmod nvme_tcp 00:17:21.539 rmmod nvme_fabrics 00:17:21.539 rmmod nvme_keyring 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 275192 ']' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 275192 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 275192 ']' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 275192 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275192 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275192' 00:17:21.539 killing process with pid 275192 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 275192 00:17:21.539 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 275192 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.799 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:24.340 00:17:24.340 real 0m11.909s 00:17:24.340 user 0m18.382s 00:17:24.340 sys 0m5.361s 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:24.340 ************************************ 00:17:24.340 END TEST nvmf_invalid 00:17:24.340 ************************************ 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.340 ************************************ 00:17:24.340 START TEST nvmf_connect_stress 00:17:24.340 ************************************ 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:24.340 * Looking for test storage... 
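That closes the invalid-parameter suite (about 12 seconds of wall time); run_test then launches connect_stress.sh, whose shared helpers begin by locating test storage and probing the installed lcov. The cmp_versions logic traced just below splits each version string on dots and compares the fields numerically; a rough stand-alone equivalent is sketched here (version_lt is a hypothetical name, not the SPDK helper itself):

    # Succeed when version $1 sorts strictly before version $2, comparing numeric fields.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'old lcov: append the --rc coverage options'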
00:17:24.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:24.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.340 --rc genhtml_branch_coverage=1 00:17:24.340 --rc genhtml_function_coverage=1 00:17:24.340 --rc genhtml_legend=1 00:17:24.340 --rc geninfo_all_blocks=1 00:17:24.340 --rc geninfo_unexecuted_blocks=1 00:17:24.340 00:17:24.340 ' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:24.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.340 --rc genhtml_branch_coverage=1 00:17:24.340 --rc genhtml_function_coverage=1 00:17:24.340 --rc genhtml_legend=1 00:17:24.340 --rc geninfo_all_blocks=1 00:17:24.340 --rc geninfo_unexecuted_blocks=1 00:17:24.340 00:17:24.340 ' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:24.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.340 --rc genhtml_branch_coverage=1 00:17:24.340 --rc genhtml_function_coverage=1 00:17:24.340 --rc genhtml_legend=1 00:17:24.340 --rc geninfo_all_blocks=1 00:17:24.340 --rc geninfo_unexecuted_blocks=1 00:17:24.340 00:17:24.340 ' 00:17:24.340 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:24.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.340 --rc genhtml_branch_coverage=1 00:17:24.340 --rc genhtml_function_coverage=1 00:17:24.340 --rc genhtml_legend=1 00:17:24.340 --rc geninfo_all_blocks=1 00:17:24.340 --rc geninfo_unexecuted_blocks=1 00:17:24.340 00:17:24.340 ' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:24.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.341 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:29.622 22:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:29.622 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:29.622 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:29.622 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:29.623 Found net devices under 0000:af:00.0: cvl_0_0 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:29.623 Found net devices under 0000:af:00.1: cvl_0_1 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:29.623 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:29.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:17:29.883 00:17:29.883 --- 10.0.0.2 ping statistics --- 00:17:29.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.883 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:17:29.883 00:17:29.883 --- 10.0.0.1 ping statistics --- 00:17:29.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.883 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.883 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279336 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279336 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279336 ']' 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:30.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.143 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.143 [2024-12-16 22:23:19.653952] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:30.143 [2024-12-16 22:23:19.654000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.143 [2024-12-16 22:23:19.732062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.143 [2024-12-16 22:23:19.754864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.143 [2024-12-16 22:23:19.754900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.143 [2024-12-16 22:23:19.754909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.143 [2024-12-16 22:23:19.754917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.143 [2024-12-16 22:23:19.754923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.143 [2024-12-16 22:23:19.756302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.143 [2024-12-16 22:23:19.756407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.143 [2024-12-16 22:23:19.756407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 [2024-12-16 22:23:19.887949] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
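Target-side setup for the stress run follows the usual SPDK sequence; the flags below are copied verbatim from the trace, condensed into plain rpc.py calls (path shortened, and the nvmf_tgt started above is assumed to be listening on /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -a allows any host
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev with 512-byte blocks

The connect_stress worker then hammers that listener for 10 seconds (-t 10) while the script builds a batch of RPC snippets into rpc.txt (the seq 1 20 / cat loop traced below) and polls the worker with kill -0 $PERF_PID, which delivers no signal and only tests that the process is still alive.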
00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 [2024-12-16 22:23:19.908174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.403 NULL1 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279523 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:30.403 22:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.403 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.404 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.662 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.662 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:30.662 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.662 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.662 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.232 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.232 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:31.232 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.232 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.232 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.497 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.497 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:31.497 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.497 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.497 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.757 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.757 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:31.757 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.757 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.757 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.016 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.016 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:32.016 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.016 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.016 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.276 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.276 22:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:32.276 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.276 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.276 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.843 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.843 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:32.843 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.843 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.843 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.102 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.102 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:33.102 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.102 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.102 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.361 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.361 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:33.361 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.361 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.361 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.620 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.620 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:33.620 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.620 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.620 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.189 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.189 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:34.189 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.189 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.189 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.448 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.448 22:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:34.448 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.448 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.448 22:23:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.707 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.707 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:34.707 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.707 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.707 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.966 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.966 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:34.966 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.966 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.966 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.225 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.225 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:35.225 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.225 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.225 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.804 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.804 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:35.804 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.804 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.804 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.063 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.063 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:36.063 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.063 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.063 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.322 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.322 22:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:36.322 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.322 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.322 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.581 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.581 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:36.581 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.581 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.581 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.840 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.840 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:36.840 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.840 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.840 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.409 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.409 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:37.409 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.409 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.409 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.668 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.668 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:37.668 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.668 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.668 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.927 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.927 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:37.927 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.927 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.927 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.186 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.186 22:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:38.186 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.186 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.186 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.754 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.754 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:38.754 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.754 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.754 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.013 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.013 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:39.013 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.013 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.013 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.272 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.272 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:39.272 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.272 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.273 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.538 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.538 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:39.538 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.538 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.538 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.797 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.797 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:39.797 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.797 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.797 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.365 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.365 22:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:40.365 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.365 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.365 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.365 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279523 00:17:40.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279523) - No such process 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279523 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:40.623 rmmod nvme_tcp 00:17:40.623 rmmod nvme_fabrics 00:17:40.623 rmmod nvme_keyring 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279336 ']' 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279336 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279336 ']' 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279336 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279336 00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
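The twenty-odd near-identical blocks above are the heart of connect_stress.sh: line 28 builds a batch of twenty RPC commands into rpc.txt once, then lines 34-35 loop, using kill -0 to confirm the connect_stress tool (PID 279523, launched with -t 10 for a ten-second run) is still alive before replaying the RPC batch against the target; the loop ends when kill -0 reports "No such process". A minimal sketch of that shape follows, assuming a placeholder RPC payload (the exact commands the script writes are not visible in this trace) and keeping the rpc_cmd helper the log traces:

#!/usr/bin/env bash
# Sketch of the poll-and-replay loop traced at connect_stress.sh@20-39.
./connect_stress -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -t 10 &                            # sh@20-21: background stress tool, ten-second run
PERF_PID=$!
rpcs=rpc.txt                         # sh@23: the script uses $testdir/rpc.txt
rm -f "$rpcs"                        # sh@25
for i in $(seq 1 20); do             # sh@27-28: build the RPC batch once
  echo "rpc_get_methods" >> "$rpcs"  # payload assumed; the script appends via cat
done
while kill -0 "$PERF_PID"; do        # sh@34: turns false (and noisy) once the tool exits
  rpc_cmd < "$rpcs"                  # sh@35: replay the whole batch over the RPC socket
done
wait "$PERF_PID"                     # sh@38: reap the finished stress process
rm -f "$rpcs"                        # sh@39: clean up the batch file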
00:17:40.623 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:40.624 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279336' 00:17:40.624 killing process with pid 279336 00:17:40.624 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279336 00:17:40.624 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279336 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.883 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.790 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:42.790 00:17:42.790 real 0m18.948s 00:17:42.790 user 0m41.435s 00:17:42.790 sys 0m6.649s 00:17:42.790 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.790 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.790 ************************************ 00:17:42.790 END TEST nvmf_connect_stress 00:17:42.790 ************************************ 00:17:43.050 22:23:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:43.050 22:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.050 22:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.051 ************************************ 00:17:43.051 START TEST nvmf_fused_ordering 00:17:43.051 ************************************ 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:43.051 * Looking for test storage... 
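The END TEST / START TEST banners and the real/user/sys summary above come from the run_test harness, which wraps each test script in banners and time(1). A simplified reconstruction of that wrapper is sketched below; SPDK's real helper also manages xtrace state and exit-code bookkeeping that this sketch omits:

#!/usr/bin/env bash
# Simplified run_test, reconstructed from the banners visible in this log.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"    # produces the real/user/sys lines seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test nvmf_fused_ordering ./fused_ordering.sh --transport=tcp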
00:17:43.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.051 --rc genhtml_branch_coverage=1 00:17:43.051 --rc genhtml_function_coverage=1 00:17:43.051 --rc genhtml_legend=1 00:17:43.051 --rc geninfo_all_blocks=1 00:17:43.051 --rc geninfo_unexecuted_blocks=1 00:17:43.051 00:17:43.051 ' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.051 --rc genhtml_branch_coverage=1 00:17:43.051 --rc genhtml_function_coverage=1 00:17:43.051 --rc genhtml_legend=1 00:17:43.051 --rc geninfo_all_blocks=1 00:17:43.051 --rc geninfo_unexecuted_blocks=1 00:17:43.051 00:17:43.051 ' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.051 --rc genhtml_branch_coverage=1 00:17:43.051 --rc genhtml_function_coverage=1 00:17:43.051 --rc genhtml_legend=1 00:17:43.051 --rc geninfo_all_blocks=1 00:17:43.051 --rc geninfo_unexecuted_blocks=1 00:17:43.051 00:17:43.051 ' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:43.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.051 --rc genhtml_branch_coverage=1 00:17:43.051 --rc genhtml_function_coverage=1 00:17:43.051 --rc genhtml_legend=1 00:17:43.051 --rc geninfo_all_blocks=1 00:17:43.051 --rc geninfo_unexecuted_blocks=1 00:17:43.051 00:17:43.051 ' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.051 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:43.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.052 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:49.628 22:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:49.628 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:49.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:49.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:49.629 Found net devices under 0000:af:00.0: cvl_0_0 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:49.629 Found net devices under 0000:af:00.1: cvl_0_1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:49.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:17:49.629 00:17:49.629 --- 10.0.0.2 ping statistics --- 00:17:49.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.629 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:17:49.629 00:17:49.629 --- 10.0.0.1 ping statistics --- 00:17:49.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.629 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=284578 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 284578 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 284578 ']' 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:49.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.629 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.629 [2024-12-16 22:23:38.707973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:49.629 [2024-12-16 22:23:38.708021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.629 [2024-12-16 22:23:38.785334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.629 [2024-12-16 22:23:38.806866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.629 [2024-12-16 22:23:38.806900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.629 [2024-12-16 22:23:38.806909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.629 [2024-12-16 22:23:38.806917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.630 [2024-12-16 22:23:38.806927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.630 [2024-12-16 22:23:38.807445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 [2024-12-16 22:23:38.937966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 [2024-12-16 22:23:38.962157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 NULL1 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.630 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:49.630 [2024-12-16 22:23:39.023789] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
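The trace above first verifies namespace connectivity (ping in both directions across the veth pair, plus an iptables ACCEPT for TCP port 4420) and then provisions the target over RPC: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a null bdev attached as namespace 1. A minimal stand-alone sketch of that provisioning sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock (in the harness the same calls go through rpc_cmd against the app inside the cvl_0_0_ns_spdk namespace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # same transport flags as the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                           # allow any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool then connects using the trid string above ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') and prints one numbered fused_ordering(N) line per iteration, counting 0 through 1023, which accounts for the long runs that follow.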
00:17:49.630 [2024-12-16 22:23:39.023834] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284602 ] 00:17:49.889 Attached to nqn.2016-06.io.spdk:cnode1 00:17:49.889 Namespace ID: 1 size: 1GB 00:17:49.889 fused_ordering(0) 00:17:49.889 fused_ordering(1) 00:17:49.890 fused_ordering(2) 00:17:49.890 fused_ordering(3) 00:17:49.890 fused_ordering(4) 00:17:49.890 fused_ordering(5) 00:17:49.890 fused_ordering(6) 00:17:49.890 fused_ordering(7) 00:17:49.890 fused_ordering(8) 00:17:49.890 fused_ordering(9) 00:17:49.890 fused_ordering(10) 00:17:49.890 fused_ordering(11) 00:17:49.890 fused_ordering(12) 00:17:49.890 fused_ordering(13) 00:17:49.890 fused_ordering(14) 00:17:49.890 fused_ordering(15) 00:17:49.890 fused_ordering(16) 00:17:49.890 fused_ordering(17) 00:17:49.890 fused_ordering(18) 00:17:49.890 fused_ordering(19) 00:17:49.890 fused_ordering(20) 00:17:49.890 fused_ordering(21) 00:17:49.890 fused_ordering(22) 00:17:49.890 fused_ordering(23) 00:17:49.890 fused_ordering(24) 00:17:49.890 fused_ordering(25) 00:17:49.890 fused_ordering(26) 00:17:49.890 fused_ordering(27) 00:17:49.890 fused_ordering(28) 00:17:49.890 fused_ordering(29) 00:17:49.890 fused_ordering(30) 00:17:49.890 fused_ordering(31) 00:17:49.890 fused_ordering(32) 00:17:49.890 fused_ordering(33) 00:17:49.890 fused_ordering(34) 00:17:49.890 fused_ordering(35) 00:17:49.890 fused_ordering(36) 00:17:49.890 fused_ordering(37) 00:17:49.890 fused_ordering(38) 00:17:49.890 fused_ordering(39) 00:17:49.890 fused_ordering(40) 00:17:49.890 fused_ordering(41) 00:17:49.890 fused_ordering(42) 00:17:49.890 fused_ordering(43) 00:17:49.890 fused_ordering(44) 00:17:49.890 fused_ordering(45) 00:17:49.890 fused_ordering(46) 00:17:49.890 fused_ordering(47) 00:17:49.890 fused_ordering(48) 00:17:49.890 fused_ordering(49) 00:17:49.890 fused_ordering(50) 00:17:49.890 fused_ordering(51) 00:17:49.890 fused_ordering(52) 00:17:49.890 fused_ordering(53) 00:17:49.890 fused_ordering(54) 00:17:49.890 fused_ordering(55) 00:17:49.890 fused_ordering(56) 00:17:49.890 fused_ordering(57) 00:17:49.890 fused_ordering(58) 00:17:49.890 fused_ordering(59) 00:17:49.890 fused_ordering(60) 00:17:49.890 fused_ordering(61) 00:17:49.890 fused_ordering(62) 00:17:49.890 fused_ordering(63) 00:17:49.890 fused_ordering(64) 00:17:49.890 fused_ordering(65) 00:17:49.890 fused_ordering(66) 00:17:49.890 fused_ordering(67) 00:17:49.890 fused_ordering(68) 00:17:49.890 fused_ordering(69) 00:17:49.890 fused_ordering(70) 00:17:49.890 fused_ordering(71) 00:17:49.890 fused_ordering(72) 00:17:49.890 fused_ordering(73) 00:17:49.890 fused_ordering(74) 00:17:49.890 fused_ordering(75) 00:17:49.890 fused_ordering(76) 00:17:49.890 fused_ordering(77) 00:17:49.890 fused_ordering(78) 00:17:49.890 fused_ordering(79) 00:17:49.890 fused_ordering(80) 00:17:49.890 fused_ordering(81) 00:17:49.890 fused_ordering(82) 00:17:49.890 fused_ordering(83) 00:17:49.890 fused_ordering(84) 00:17:49.890 fused_ordering(85) 00:17:49.890 fused_ordering(86) 00:17:49.890 fused_ordering(87) 00:17:49.890 fused_ordering(88) 00:17:49.890 fused_ordering(89) 00:17:49.890 fused_ordering(90) 00:17:49.890 fused_ordering(91) 00:17:49.890 fused_ordering(92) 00:17:49.890 fused_ordering(93) 00:17:49.890 fused_ordering(94) 00:17:49.890 fused_ordering(95) 00:17:49.890 fused_ordering(96) 00:17:49.890 fused_ordering(97) 00:17:49.890 fused_ordering(98) 
00:17:49.890 fused_ordering(99) 00:17:49.890 fused_ordering(100) 00:17:49.890 fused_ordering(101) 00:17:49.890 fused_ordering(102) 00:17:49.890 fused_ordering(103) 00:17:49.890 fused_ordering(104) 00:17:49.890 fused_ordering(105) 00:17:49.890 fused_ordering(106) 00:17:49.890 fused_ordering(107) 00:17:49.890 fused_ordering(108) 00:17:49.890 fused_ordering(109) 00:17:49.890 fused_ordering(110) 00:17:49.890 fused_ordering(111) 00:17:49.890 fused_ordering(112) 00:17:49.890 fused_ordering(113) 00:17:49.890 fused_ordering(114) 00:17:49.890 fused_ordering(115) 00:17:49.890 fused_ordering(116) 00:17:49.890 fused_ordering(117) 00:17:49.890 fused_ordering(118) 00:17:49.890 fused_ordering(119) 00:17:49.890 fused_ordering(120) 00:17:49.890 fused_ordering(121) 00:17:49.890 fused_ordering(122) 00:17:49.890 fused_ordering(123) 00:17:49.890 fused_ordering(124) 00:17:49.890 fused_ordering(125) 00:17:49.890 fused_ordering(126) 00:17:49.890 fused_ordering(127) 00:17:49.890 fused_ordering(128) 00:17:49.890 fused_ordering(129) 00:17:49.890 fused_ordering(130) 00:17:49.890 fused_ordering(131) 00:17:49.890 fused_ordering(132) 00:17:49.890 fused_ordering(133) 00:17:49.890 fused_ordering(134) 00:17:49.890 fused_ordering(135) 00:17:49.890 fused_ordering(136) 00:17:49.890 fused_ordering(137) 00:17:49.890 fused_ordering(138) 00:17:49.890 fused_ordering(139) 00:17:49.890 fused_ordering(140) 00:17:49.890 fused_ordering(141) 00:17:49.890 fused_ordering(142) 00:17:49.890 fused_ordering(143) 00:17:49.890 fused_ordering(144) 00:17:49.890 fused_ordering(145) 00:17:49.890 fused_ordering(146) 00:17:49.890 fused_ordering(147) 00:17:49.890 fused_ordering(148) 00:17:49.890 fused_ordering(149) 00:17:49.890 fused_ordering(150) 00:17:49.890 fused_ordering(151) 00:17:49.890 fused_ordering(152) 00:17:49.890 fused_ordering(153) 00:17:49.890 fused_ordering(154) 00:17:49.890 fused_ordering(155) 00:17:49.890 fused_ordering(156) 00:17:49.890 fused_ordering(157) 00:17:49.890 fused_ordering(158) 00:17:49.890 fused_ordering(159) 00:17:49.890 fused_ordering(160) 00:17:49.890 fused_ordering(161) 00:17:49.890 fused_ordering(162) 00:17:49.890 fused_ordering(163) 00:17:49.890 fused_ordering(164) 00:17:49.890 fused_ordering(165) 00:17:49.890 fused_ordering(166) 00:17:49.890 fused_ordering(167) 00:17:49.890 fused_ordering(168) 00:17:49.890 fused_ordering(169) 00:17:49.890 fused_ordering(170) 00:17:49.890 fused_ordering(171) 00:17:49.890 fused_ordering(172) 00:17:49.890 fused_ordering(173) 00:17:49.890 fused_ordering(174) 00:17:49.890 fused_ordering(175) 00:17:49.890 fused_ordering(176) 00:17:49.890 fused_ordering(177) 00:17:49.890 fused_ordering(178) 00:17:49.890 fused_ordering(179) 00:17:49.890 fused_ordering(180) 00:17:49.890 fused_ordering(181) 00:17:49.890 fused_ordering(182) 00:17:49.890 fused_ordering(183) 00:17:49.890 fused_ordering(184) 00:17:49.890 fused_ordering(185) 00:17:49.890 fused_ordering(186) 00:17:49.890 fused_ordering(187) 00:17:49.890 fused_ordering(188) 00:17:49.890 fused_ordering(189) 00:17:49.890 fused_ordering(190) 00:17:49.890 fused_ordering(191) 00:17:49.890 fused_ordering(192) 00:17:49.890 fused_ordering(193) 00:17:49.890 fused_ordering(194) 00:17:49.890 fused_ordering(195) 00:17:49.890 fused_ordering(196) 00:17:49.890 fused_ordering(197) 00:17:49.890 fused_ordering(198) 00:17:49.890 fused_ordering(199) 00:17:49.890 fused_ordering(200) 00:17:49.890 fused_ordering(201) 00:17:49.890 fused_ordering(202) 00:17:49.890 fused_ordering(203) 00:17:49.890 fused_ordering(204) 00:17:49.890 fused_ordering(205) 00:17:50.150 
fused_ordering(206) 00:17:50.150 fused_ordering(207) 00:17:50.150 fused_ordering(208) 00:17:50.150 fused_ordering(209) 00:17:50.150 fused_ordering(210) 00:17:50.150 fused_ordering(211) 00:17:50.150 fused_ordering(212) 00:17:50.150 fused_ordering(213) 00:17:50.150 fused_ordering(214) 00:17:50.150 fused_ordering(215) 00:17:50.150 fused_ordering(216) 00:17:50.150 fused_ordering(217) 00:17:50.150 fused_ordering(218) 00:17:50.150 fused_ordering(219) 00:17:50.150 fused_ordering(220) 00:17:50.150 fused_ordering(221) 00:17:50.150 fused_ordering(222) 00:17:50.150 fused_ordering(223) 00:17:50.150 fused_ordering(224) 00:17:50.150 fused_ordering(225) 00:17:50.150 fused_ordering(226) 00:17:50.150 fused_ordering(227) 00:17:50.150 fused_ordering(228) 00:17:50.150 fused_ordering(229) 00:17:50.150 fused_ordering(230) 00:17:50.150 fused_ordering(231) 00:17:50.150 fused_ordering(232) 00:17:50.150 fused_ordering(233) 00:17:50.150 fused_ordering(234) 00:17:50.150 fused_ordering(235) 00:17:50.150 fused_ordering(236) 00:17:50.150 fused_ordering(237) 00:17:50.150 fused_ordering(238) 00:17:50.150 fused_ordering(239) 00:17:50.150 fused_ordering(240) 00:17:50.150 fused_ordering(241) 00:17:50.150 fused_ordering(242) 00:17:50.150 fused_ordering(243) 00:17:50.150 fused_ordering(244) 00:17:50.150 fused_ordering(245) 00:17:50.150 fused_ordering(246) 00:17:50.150 fused_ordering(247) 00:17:50.150 fused_ordering(248) 00:17:50.150 fused_ordering(249) 00:17:50.150 fused_ordering(250) 00:17:50.150 fused_ordering(251) 00:17:50.150 fused_ordering(252) 00:17:50.150 fused_ordering(253) 00:17:50.150 fused_ordering(254) 00:17:50.150 fused_ordering(255) 00:17:50.150 fused_ordering(256) 00:17:50.150 fused_ordering(257) 00:17:50.150 fused_ordering(258) 00:17:50.150 fused_ordering(259) 00:17:50.150 fused_ordering(260) 00:17:50.150 fused_ordering(261) 00:17:50.150 fused_ordering(262) 00:17:50.150 fused_ordering(263) 00:17:50.150 fused_ordering(264) 00:17:50.150 fused_ordering(265) 00:17:50.150 fused_ordering(266) 00:17:50.150 fused_ordering(267) 00:17:50.150 fused_ordering(268) 00:17:50.150 fused_ordering(269) 00:17:50.150 fused_ordering(270) 00:17:50.150 fused_ordering(271) 00:17:50.150 fused_ordering(272) 00:17:50.150 fused_ordering(273) 00:17:50.150 fused_ordering(274) 00:17:50.150 fused_ordering(275) 00:17:50.150 fused_ordering(276) 00:17:50.150 fused_ordering(277) 00:17:50.150 fused_ordering(278) 00:17:50.150 fused_ordering(279) 00:17:50.150 fused_ordering(280) 00:17:50.150 fused_ordering(281) 00:17:50.150 fused_ordering(282) 00:17:50.150 fused_ordering(283) 00:17:50.150 fused_ordering(284) 00:17:50.150 fused_ordering(285) 00:17:50.150 fused_ordering(286) 00:17:50.150 fused_ordering(287) 00:17:50.150 fused_ordering(288) 00:17:50.150 fused_ordering(289) 00:17:50.150 fused_ordering(290) 00:17:50.150 fused_ordering(291) 00:17:50.150 fused_ordering(292) 00:17:50.150 fused_ordering(293) 00:17:50.150 fused_ordering(294) 00:17:50.150 fused_ordering(295) 00:17:50.150 fused_ordering(296) 00:17:50.150 fused_ordering(297) 00:17:50.150 fused_ordering(298) 00:17:50.150 fused_ordering(299) 00:17:50.150 fused_ordering(300) 00:17:50.150 fused_ordering(301) 00:17:50.150 fused_ordering(302) 00:17:50.150 fused_ordering(303) 00:17:50.150 fused_ordering(304) 00:17:50.150 fused_ordering(305) 00:17:50.150 fused_ordering(306) 00:17:50.150 fused_ordering(307) 00:17:50.150 fused_ordering(308) 00:17:50.150 fused_ordering(309) 00:17:50.150 fused_ordering(310) 00:17:50.150 fused_ordering(311) 00:17:50.150 fused_ordering(312) 00:17:50.150 fused_ordering(313) 
00:17:50.150 fused_ordering(314) 00:17:50.150 fused_ordering(315) 00:17:50.150 fused_ordering(316) 00:17:50.150 fused_ordering(317) 00:17:50.150 fused_ordering(318) 00:17:50.150 fused_ordering(319) 00:17:50.150 fused_ordering(320) 00:17:50.150 fused_ordering(321) 00:17:50.150 fused_ordering(322) 00:17:50.150 fused_ordering(323) 00:17:50.150 fused_ordering(324) 00:17:50.150 fused_ordering(325) 00:17:50.150 fused_ordering(326) 00:17:50.150 fused_ordering(327) 00:17:50.150 fused_ordering(328) 00:17:50.150 fused_ordering(329) 00:17:50.150 fused_ordering(330) 00:17:50.150 fused_ordering(331) 00:17:50.150 fused_ordering(332) 00:17:50.150 fused_ordering(333) 00:17:50.150 fused_ordering(334) 00:17:50.150 fused_ordering(335) 00:17:50.150 fused_ordering(336) 00:17:50.150 fused_ordering(337) 00:17:50.150 fused_ordering(338) 00:17:50.150 fused_ordering(339) 00:17:50.150 fused_ordering(340) 00:17:50.150 fused_ordering(341) 00:17:50.150 fused_ordering(342) 00:17:50.150 fused_ordering(343) 00:17:50.150 fused_ordering(344) 00:17:50.150 fused_ordering(345) 00:17:50.150 fused_ordering(346) 00:17:50.150 fused_ordering(347) 00:17:50.150 fused_ordering(348) 00:17:50.150 fused_ordering(349) 00:17:50.150 fused_ordering(350) 00:17:50.150 fused_ordering(351) 00:17:50.150 fused_ordering(352) 00:17:50.150 fused_ordering(353) 00:17:50.150 fused_ordering(354) 00:17:50.150 fused_ordering(355) 00:17:50.150 fused_ordering(356) 00:17:50.150 fused_ordering(357) 00:17:50.150 fused_ordering(358) 00:17:50.150 fused_ordering(359) 00:17:50.150 fused_ordering(360) 00:17:50.150 fused_ordering(361) 00:17:50.150 fused_ordering(362) 00:17:50.150 fused_ordering(363) 00:17:50.150 fused_ordering(364) 00:17:50.150 fused_ordering(365) 00:17:50.150 fused_ordering(366) 00:17:50.150 fused_ordering(367) 00:17:50.150 fused_ordering(368) 00:17:50.150 fused_ordering(369) 00:17:50.150 fused_ordering(370) 00:17:50.150 fused_ordering(371) 00:17:50.150 fused_ordering(372) 00:17:50.150 fused_ordering(373) 00:17:50.150 fused_ordering(374) 00:17:50.150 fused_ordering(375) 00:17:50.150 fused_ordering(376) 00:17:50.150 fused_ordering(377) 00:17:50.150 fused_ordering(378) 00:17:50.150 fused_ordering(379) 00:17:50.150 fused_ordering(380) 00:17:50.150 fused_ordering(381) 00:17:50.150 fused_ordering(382) 00:17:50.150 fused_ordering(383) 00:17:50.150 fused_ordering(384) 00:17:50.150 fused_ordering(385) 00:17:50.150 fused_ordering(386) 00:17:50.150 fused_ordering(387) 00:17:50.150 fused_ordering(388) 00:17:50.150 fused_ordering(389) 00:17:50.150 fused_ordering(390) 00:17:50.150 fused_ordering(391) 00:17:50.150 fused_ordering(392) 00:17:50.150 fused_ordering(393) 00:17:50.150 fused_ordering(394) 00:17:50.150 fused_ordering(395) 00:17:50.150 fused_ordering(396) 00:17:50.150 fused_ordering(397) 00:17:50.150 fused_ordering(398) 00:17:50.150 fused_ordering(399) 00:17:50.150 fused_ordering(400) 00:17:50.150 fused_ordering(401) 00:17:50.150 fused_ordering(402) 00:17:50.150 fused_ordering(403) 00:17:50.150 fused_ordering(404) 00:17:50.150 fused_ordering(405) 00:17:50.150 fused_ordering(406) 00:17:50.150 fused_ordering(407) 00:17:50.150 fused_ordering(408) 00:17:50.150 fused_ordering(409) 00:17:50.150 fused_ordering(410) 00:17:50.410 fused_ordering(411) 00:17:50.410 fused_ordering(412) 00:17:50.410 fused_ordering(413) 00:17:50.410 fused_ordering(414) 00:17:50.410 fused_ordering(415) 00:17:50.410 fused_ordering(416) 00:17:50.410 fused_ordering(417) 00:17:50.410 fused_ordering(418) 00:17:50.410 fused_ordering(419) 00:17:50.410 fused_ordering(420) 00:17:50.410 
fused_ordering(421) 00:17:50.410 fused_ordering(422) 00:17:50.410 fused_ordering(423) 00:17:50.410 fused_ordering(424) 00:17:50.410 fused_ordering(425) 00:17:50.410 fused_ordering(426) 00:17:50.410 fused_ordering(427) 00:17:50.410 fused_ordering(428) 00:17:50.410 fused_ordering(429) 00:17:50.410 fused_ordering(430) 00:17:50.410 fused_ordering(431) 00:17:50.410 fused_ordering(432) 00:17:50.410 fused_ordering(433) 00:17:50.410 fused_ordering(434) 00:17:50.410 fused_ordering(435) 00:17:50.410 fused_ordering(436) 00:17:50.410 fused_ordering(437) 00:17:50.410 fused_ordering(438) 00:17:50.410 fused_ordering(439) 00:17:50.410 fused_ordering(440) 00:17:50.410 fused_ordering(441) 00:17:50.410 fused_ordering(442) 00:17:50.410 fused_ordering(443) 00:17:50.410 fused_ordering(444) 00:17:50.410 fused_ordering(445) 00:17:50.410 fused_ordering(446) 00:17:50.410 fused_ordering(447) 00:17:50.410 fused_ordering(448) 00:17:50.410 fused_ordering(449) 00:17:50.410 fused_ordering(450) 00:17:50.410 fused_ordering(451) 00:17:50.410 fused_ordering(452) 00:17:50.410 fused_ordering(453) 00:17:50.410 fused_ordering(454) 00:17:50.410 fused_ordering(455) 00:17:50.410 fused_ordering(456) 00:17:50.410 fused_ordering(457) 00:17:50.410 fused_ordering(458) 00:17:50.410 fused_ordering(459) 00:17:50.410 fused_ordering(460) 00:17:50.410 fused_ordering(461) 00:17:50.410 fused_ordering(462) 00:17:50.410 fused_ordering(463) 00:17:50.410 fused_ordering(464) 00:17:50.410 fused_ordering(465) 00:17:50.410 fused_ordering(466) 00:17:50.410 fused_ordering(467) 00:17:50.410 fused_ordering(468) 00:17:50.410 fused_ordering(469) 00:17:50.410 fused_ordering(470) 00:17:50.410 fused_ordering(471) 00:17:50.410 fused_ordering(472) 00:17:50.410 fused_ordering(473) 00:17:50.410 fused_ordering(474) 00:17:50.410 fused_ordering(475) 00:17:50.410 fused_ordering(476) 00:17:50.410 fused_ordering(477) 00:17:50.410 fused_ordering(478) 00:17:50.410 fused_ordering(479) 00:17:50.410 fused_ordering(480) 00:17:50.410 fused_ordering(481) 00:17:50.410 fused_ordering(482) 00:17:50.410 fused_ordering(483) 00:17:50.410 fused_ordering(484) 00:17:50.410 fused_ordering(485) 00:17:50.410 fused_ordering(486) 00:17:50.410 fused_ordering(487) 00:17:50.410 fused_ordering(488) 00:17:50.410 fused_ordering(489) 00:17:50.410 fused_ordering(490) 00:17:50.410 fused_ordering(491) 00:17:50.410 fused_ordering(492) 00:17:50.410 fused_ordering(493) 00:17:50.410 fused_ordering(494) 00:17:50.410 fused_ordering(495) 00:17:50.411 fused_ordering(496) 00:17:50.411 fused_ordering(497) 00:17:50.411 fused_ordering(498) 00:17:50.411 fused_ordering(499) 00:17:50.411 fused_ordering(500) 00:17:50.411 fused_ordering(501) 00:17:50.411 fused_ordering(502) 00:17:50.411 fused_ordering(503) 00:17:50.411 fused_ordering(504) 00:17:50.411 fused_ordering(505) 00:17:50.411 fused_ordering(506) 00:17:50.411 fused_ordering(507) 00:17:50.411 fused_ordering(508) 00:17:50.411 fused_ordering(509) 00:17:50.411 fused_ordering(510) 00:17:50.411 fused_ordering(511) 00:17:50.411 fused_ordering(512) 00:17:50.411 fused_ordering(513) 00:17:50.411 fused_ordering(514) 00:17:50.411 fused_ordering(515) 00:17:50.411 fused_ordering(516) 00:17:50.411 fused_ordering(517) 00:17:50.411 fused_ordering(518) 00:17:50.411 fused_ordering(519) 00:17:50.411 fused_ordering(520) 00:17:50.411 fused_ordering(521) 00:17:50.411 fused_ordering(522) 00:17:50.411 fused_ordering(523) 00:17:50.411 fused_ordering(524) 00:17:50.411 fused_ordering(525) 00:17:50.411 fused_ordering(526) 00:17:50.411 fused_ordering(527) 00:17:50.411 fused_ordering(528) 
00:17:50.411 fused_ordering(529) 00:17:50.411 fused_ordering(530) 00:17:50.411 fused_ordering(531) 00:17:50.411 fused_ordering(532) 00:17:50.411 fused_ordering(533) 00:17:50.411 fused_ordering(534) 00:17:50.411 fused_ordering(535) 00:17:50.411 fused_ordering(536) 00:17:50.411 fused_ordering(537) 00:17:50.411 fused_ordering(538) 00:17:50.411 fused_ordering(539) 00:17:50.411 fused_ordering(540) 00:17:50.411 fused_ordering(541) 00:17:50.411 fused_ordering(542) 00:17:50.411 fused_ordering(543) 00:17:50.411 fused_ordering(544) 00:17:50.411 fused_ordering(545) 00:17:50.411 fused_ordering(546) 00:17:50.411 fused_ordering(547) 00:17:50.411 fused_ordering(548) 00:17:50.411 fused_ordering(549) 00:17:50.411 fused_ordering(550) 00:17:50.411 fused_ordering(551) 00:17:50.411 fused_ordering(552) 00:17:50.411 fused_ordering(553) 00:17:50.411 fused_ordering(554) 00:17:50.411 fused_ordering(555) 00:17:50.411 fused_ordering(556) 00:17:50.411 fused_ordering(557) 00:17:50.411 fused_ordering(558) 00:17:50.411 fused_ordering(559) 00:17:50.411 fused_ordering(560) 00:17:50.411 fused_ordering(561) 00:17:50.411 fused_ordering(562) 00:17:50.411 fused_ordering(563) 00:17:50.411 fused_ordering(564) 00:17:50.411 fused_ordering(565) 00:17:50.411 fused_ordering(566) 00:17:50.411 fused_ordering(567) 00:17:50.411 fused_ordering(568) 00:17:50.411 fused_ordering(569) 00:17:50.411 fused_ordering(570) 00:17:50.411 fused_ordering(571) 00:17:50.411 fused_ordering(572) 00:17:50.411 fused_ordering(573) 00:17:50.411 fused_ordering(574) 00:17:50.411 fused_ordering(575) 00:17:50.411 fused_ordering(576) 00:17:50.411 fused_ordering(577) 00:17:50.411 fused_ordering(578) 00:17:50.411 fused_ordering(579) 00:17:50.411 fused_ordering(580) 00:17:50.411 fused_ordering(581) 00:17:50.411 fused_ordering(582) 00:17:50.411 fused_ordering(583) 00:17:50.411 fused_ordering(584) 00:17:50.411 fused_ordering(585) 00:17:50.411 fused_ordering(586) 00:17:50.411 fused_ordering(587) 00:17:50.411 fused_ordering(588) 00:17:50.411 fused_ordering(589) 00:17:50.411 fused_ordering(590) 00:17:50.411 fused_ordering(591) 00:17:50.411 fused_ordering(592) 00:17:50.411 fused_ordering(593) 00:17:50.411 fused_ordering(594) 00:17:50.411 fused_ordering(595) 00:17:50.411 fused_ordering(596) 00:17:50.411 fused_ordering(597) 00:17:50.411 fused_ordering(598) 00:17:50.411 fused_ordering(599) 00:17:50.411 fused_ordering(600) 00:17:50.411 fused_ordering(601) 00:17:50.411 fused_ordering(602) 00:17:50.411 fused_ordering(603) 00:17:50.411 fused_ordering(604) 00:17:50.411 fused_ordering(605) 00:17:50.411 fused_ordering(606) 00:17:50.411 fused_ordering(607) 00:17:50.411 fused_ordering(608) 00:17:50.411 fused_ordering(609) 00:17:50.411 fused_ordering(610) 00:17:50.411 fused_ordering(611) 00:17:50.411 fused_ordering(612) 00:17:50.411 fused_ordering(613) 00:17:50.411 fused_ordering(614) 00:17:50.411 fused_ordering(615) 00:17:50.670 fused_ordering(616) 00:17:50.670 fused_ordering(617) 00:17:50.670 fused_ordering(618) 00:17:50.670 fused_ordering(619) 00:17:50.670 fused_ordering(620) 00:17:50.670 fused_ordering(621) 00:17:50.670 fused_ordering(622) 00:17:50.670 fused_ordering(623) 00:17:50.670 fused_ordering(624) 00:17:50.670 fused_ordering(625) 00:17:50.670 fused_ordering(626) 00:17:50.670 fused_ordering(627) 00:17:50.670 fused_ordering(628) 00:17:50.670 fused_ordering(629) 00:17:50.670 fused_ordering(630) 00:17:50.670 fused_ordering(631) 00:17:50.670 fused_ordering(632) 00:17:50.670 fused_ordering(633) 00:17:50.670 fused_ordering(634) 00:17:50.670 fused_ordering(635) 00:17:50.670 
fused_ordering(636) 00:17:50.670 fused_ordering(637) 00:17:50.671 fused_ordering(638) 00:17:50.671 fused_ordering(639) 00:17:50.671 fused_ordering(640) 00:17:50.671 fused_ordering(641) 00:17:50.671 fused_ordering(642) 00:17:50.671 fused_ordering(643) 00:17:50.671 fused_ordering(644) 00:17:50.671 fused_ordering(645) 00:17:50.671 fused_ordering(646) 00:17:50.671 fused_ordering(647) 00:17:50.671 fused_ordering(648) 00:17:50.671 fused_ordering(649) 00:17:50.671 fused_ordering(650) 00:17:50.671 fused_ordering(651) 00:17:50.671 fused_ordering(652) 00:17:50.671 fused_ordering(653) 00:17:50.671 fused_ordering(654) 00:17:50.671 fused_ordering(655) 00:17:50.671 fused_ordering(656) 00:17:50.671 fused_ordering(657) 00:17:50.671 fused_ordering(658) 00:17:50.671 fused_ordering(659) 00:17:50.671 fused_ordering(660) 00:17:50.671 fused_ordering(661) 00:17:50.671 fused_ordering(662) 00:17:50.671 fused_ordering(663) 00:17:50.671 fused_ordering(664) 00:17:50.671 fused_ordering(665) 00:17:50.671 fused_ordering(666) 00:17:50.671 fused_ordering(667) 00:17:50.671 fused_ordering(668) 00:17:50.671 fused_ordering(669) 00:17:50.671 fused_ordering(670) 00:17:50.671 fused_ordering(671) 00:17:50.671 fused_ordering(672) 00:17:50.671 fused_ordering(673) 00:17:50.671 fused_ordering(674) 00:17:50.671 fused_ordering(675) 00:17:50.671 fused_ordering(676) 00:17:50.671 fused_ordering(677) 00:17:50.671 fused_ordering(678) 00:17:50.671 fused_ordering(679) 00:17:50.671 fused_ordering(680) 00:17:50.671 fused_ordering(681) 00:17:50.671 fused_ordering(682) 00:17:50.671 fused_ordering(683) 00:17:50.671 fused_ordering(684) 00:17:50.671 fused_ordering(685) 00:17:50.671 fused_ordering(686) 00:17:50.671 fused_ordering(687) 00:17:50.671 fused_ordering(688) 00:17:50.671 fused_ordering(689) 00:17:50.671 fused_ordering(690) 00:17:50.671 fused_ordering(691) 00:17:50.671 fused_ordering(692) 00:17:50.671 fused_ordering(693) 00:17:50.671 fused_ordering(694) 00:17:50.671 fused_ordering(695) 00:17:50.671 fused_ordering(696) 00:17:50.671 fused_ordering(697) 00:17:50.671 fused_ordering(698) 00:17:50.671 fused_ordering(699) 00:17:50.671 fused_ordering(700) 00:17:50.671 fused_ordering(701) 00:17:50.671 fused_ordering(702) 00:17:50.671 fused_ordering(703) 00:17:50.671 fused_ordering(704) 00:17:50.671 fused_ordering(705) 00:17:50.671 fused_ordering(706) 00:17:50.671 fused_ordering(707) 00:17:50.671 fused_ordering(708) 00:17:50.671 fused_ordering(709) 00:17:50.671 fused_ordering(710) 00:17:50.671 fused_ordering(711) 00:17:50.671 fused_ordering(712) 00:17:50.671 fused_ordering(713) 00:17:50.671 fused_ordering(714) 00:17:50.671 fused_ordering(715) 00:17:50.671 fused_ordering(716) 00:17:50.671 fused_ordering(717) 00:17:50.671 fused_ordering(718) 00:17:50.671 fused_ordering(719) 00:17:50.671 fused_ordering(720) 00:17:50.671 fused_ordering(721) 00:17:50.671 fused_ordering(722) 00:17:50.671 fused_ordering(723) 00:17:50.671 fused_ordering(724) 00:17:50.671 fused_ordering(725) 00:17:50.671 fused_ordering(726) 00:17:50.671 fused_ordering(727) 00:17:50.671 fused_ordering(728) 00:17:50.671 fused_ordering(729) 00:17:50.671 fused_ordering(730) 00:17:50.671 fused_ordering(731) 00:17:50.671 fused_ordering(732) 00:17:50.671 fused_ordering(733) 00:17:50.671 fused_ordering(734) 00:17:50.671 fused_ordering(735) 00:17:50.671 fused_ordering(736) 00:17:50.671 fused_ordering(737) 00:17:50.671 fused_ordering(738) 00:17:50.671 fused_ordering(739) 00:17:50.671 fused_ordering(740) 00:17:50.671 fused_ordering(741) 00:17:50.671 fused_ordering(742) 00:17:50.671 fused_ordering(743) 
00:17:50.671 fused_ordering(744) 00:17:50.671 fused_ordering(745) 00:17:50.671 fused_ordering(746) 00:17:50.671 fused_ordering(747) 00:17:50.671 fused_ordering(748) 00:17:50.671 fused_ordering(749) 00:17:50.671 fused_ordering(750) 00:17:50.671 fused_ordering(751) 00:17:50.671 fused_ordering(752) 00:17:50.671 fused_ordering(753) 00:17:50.671 fused_ordering(754) 00:17:50.671 fused_ordering(755) 00:17:50.671 fused_ordering(756) 00:17:50.671 fused_ordering(757) 00:17:50.671 fused_ordering(758) 00:17:50.671 fused_ordering(759) 00:17:50.671 fused_ordering(760) 00:17:50.671 fused_ordering(761) 00:17:50.671 fused_ordering(762) 00:17:50.671 fused_ordering(763) 00:17:50.671 fused_ordering(764) 00:17:50.671 fused_ordering(765) 00:17:50.671 fused_ordering(766) 00:17:50.671 fused_ordering(767) 00:17:50.671 fused_ordering(768) 00:17:50.671 fused_ordering(769) 00:17:50.671 fused_ordering(770) 00:17:50.671 fused_ordering(771) 00:17:50.671 fused_ordering(772) 00:17:50.671 fused_ordering(773) 00:17:50.671 fused_ordering(774) 00:17:50.671 fused_ordering(775) 00:17:50.671 fused_ordering(776) 00:17:50.671 fused_ordering(777) 00:17:50.671 fused_ordering(778) 00:17:50.671 fused_ordering(779) 00:17:50.671 fused_ordering(780) 00:17:50.671 fused_ordering(781) 00:17:50.671 fused_ordering(782) 00:17:50.671 fused_ordering(783) 00:17:50.671 fused_ordering(784) 00:17:50.671 fused_ordering(785) 00:17:50.671 fused_ordering(786) 00:17:50.671 fused_ordering(787) 00:17:50.671 fused_ordering(788) 00:17:50.671 fused_ordering(789) 00:17:50.671 fused_ordering(790) 00:17:50.671 fused_ordering(791) 00:17:50.671 fused_ordering(792) 00:17:50.671 fused_ordering(793) 00:17:50.671 fused_ordering(794) 00:17:50.671 fused_ordering(795) 00:17:50.671 fused_ordering(796) 00:17:50.671 fused_ordering(797) 00:17:50.671 fused_ordering(798) 00:17:50.671 fused_ordering(799) 00:17:50.671 fused_ordering(800) 00:17:50.671 fused_ordering(801) 00:17:50.671 fused_ordering(802) 00:17:50.671 fused_ordering(803) 00:17:50.671 fused_ordering(804) 00:17:50.671 fused_ordering(805) 00:17:50.671 fused_ordering(806) 00:17:50.671 fused_ordering(807) 00:17:50.671 fused_ordering(808) 00:17:50.671 fused_ordering(809) 00:17:50.671 fused_ordering(810) 00:17:50.671 fused_ordering(811) 00:17:50.671 fused_ordering(812) 00:17:50.671 fused_ordering(813) 00:17:50.671 fused_ordering(814) 00:17:50.671 fused_ordering(815) 00:17:50.671 fused_ordering(816) 00:17:50.671 fused_ordering(817) 00:17:50.671 fused_ordering(818) 00:17:50.671 fused_ordering(819) 00:17:50.671 fused_ordering(820) 00:17:51.240 fused_ordering(821) 00:17:51.240 fused_ordering(822) 00:17:51.240 fused_ordering(823) 00:17:51.240 fused_ordering(824) 00:17:51.240 fused_ordering(825) 00:17:51.241 fused_ordering(826) 00:17:51.241 fused_ordering(827) 00:17:51.241 fused_ordering(828) 00:17:51.241 fused_ordering(829) 00:17:51.241 fused_ordering(830) 00:17:51.241 fused_ordering(831) 00:17:51.241 fused_ordering(832) 00:17:51.241 fused_ordering(833) 00:17:51.241 fused_ordering(834) 00:17:51.241 fused_ordering(835) 00:17:51.241 fused_ordering(836) 00:17:51.241 fused_ordering(837) 00:17:51.241 fused_ordering(838) 00:17:51.241 fused_ordering(839) 00:17:51.241 fused_ordering(840) 00:17:51.241 fused_ordering(841) 00:17:51.241 fused_ordering(842) 00:17:51.241 fused_ordering(843) 00:17:51.241 fused_ordering(844) 00:17:51.241 fused_ordering(845) 00:17:51.241 fused_ordering(846) 00:17:51.241 fused_ordering(847) 00:17:51.241 fused_ordering(848) 00:17:51.241 fused_ordering(849) 00:17:51.241 fused_ordering(850) 00:17:51.241 
fused_ordering(851) 00:17:51.241 fused_ordering(852) 00:17:51.241 fused_ordering(853) 00:17:51.241 fused_ordering(854) 00:17:51.241 fused_ordering(855) 00:17:51.241 fused_ordering(856) 00:17:51.241 fused_ordering(857) 00:17:51.241 fused_ordering(858) 00:17:51.241 fused_ordering(859) 00:17:51.241 fused_ordering(860) 00:17:51.241 fused_ordering(861) 00:17:51.241 fused_ordering(862) 00:17:51.241 fused_ordering(863) 00:17:51.241 fused_ordering(864) 00:17:51.241 fused_ordering(865) 00:17:51.241 fused_ordering(866) 00:17:51.241 fused_ordering(867) 00:17:51.241 fused_ordering(868) 00:17:51.241 fused_ordering(869) 00:17:51.241 fused_ordering(870) 00:17:51.241 fused_ordering(871) 00:17:51.241 fused_ordering(872) 00:17:51.241 fused_ordering(873) 00:17:51.241 fused_ordering(874) 00:17:51.241 fused_ordering(875) 00:17:51.241 fused_ordering(876) 00:17:51.241 fused_ordering(877) 00:17:51.241 fused_ordering(878) 00:17:51.241 fused_ordering(879) 00:17:51.241 fused_ordering(880) 00:17:51.241 fused_ordering(881) 00:17:51.241 fused_ordering(882) 00:17:51.241 fused_ordering(883) 00:17:51.241 fused_ordering(884) 00:17:51.241 fused_ordering(885) 00:17:51.241 fused_ordering(886) 00:17:51.241 fused_ordering(887) 00:17:51.241 fused_ordering(888) 00:17:51.241 fused_ordering(889) 00:17:51.241 fused_ordering(890) 00:17:51.241 fused_ordering(891) 00:17:51.241 fused_ordering(892) 00:17:51.241 fused_ordering(893) 00:17:51.241 fused_ordering(894) 00:17:51.241 fused_ordering(895) 00:17:51.241 fused_ordering(896) 00:17:51.241 fused_ordering(897) 00:17:51.241 fused_ordering(898) 00:17:51.241 fused_ordering(899) 00:17:51.241 fused_ordering(900) 00:17:51.241 fused_ordering(901) 00:17:51.241 fused_ordering(902) 00:17:51.241 fused_ordering(903) 00:17:51.241 fused_ordering(904) 00:17:51.241 fused_ordering(905) 00:17:51.241 fused_ordering(906) 00:17:51.241 fused_ordering(907) 00:17:51.241 fused_ordering(908) 00:17:51.241 fused_ordering(909) 00:17:51.241 fused_ordering(910) 00:17:51.241 fused_ordering(911) 00:17:51.241 fused_ordering(912) 00:17:51.241 fused_ordering(913) 00:17:51.241 fused_ordering(914) 00:17:51.241 fused_ordering(915) 00:17:51.241 fused_ordering(916) 00:17:51.241 fused_ordering(917) 00:17:51.241 fused_ordering(918) 00:17:51.241 fused_ordering(919) 00:17:51.241 fused_ordering(920) 00:17:51.241 fused_ordering(921) 00:17:51.241 fused_ordering(922) 00:17:51.241 fused_ordering(923) 00:17:51.241 fused_ordering(924) 00:17:51.241 fused_ordering(925) 00:17:51.241 fused_ordering(926) 00:17:51.241 fused_ordering(927) 00:17:51.241 fused_ordering(928) 00:17:51.241 fused_ordering(929) 00:17:51.241 fused_ordering(930) 00:17:51.241 fused_ordering(931) 00:17:51.241 fused_ordering(932) 00:17:51.241 fused_ordering(933) 00:17:51.241 fused_ordering(934) 00:17:51.241 fused_ordering(935) 00:17:51.241 fused_ordering(936) 00:17:51.241 fused_ordering(937) 00:17:51.241 fused_ordering(938) 00:17:51.241 fused_ordering(939) 00:17:51.241 fused_ordering(940) 00:17:51.241 fused_ordering(941) 00:17:51.241 fused_ordering(942) 00:17:51.241 fused_ordering(943) 00:17:51.241 fused_ordering(944) 00:17:51.241 fused_ordering(945) 00:17:51.241 fused_ordering(946) 00:17:51.241 fused_ordering(947) 00:17:51.241 fused_ordering(948) 00:17:51.241 fused_ordering(949) 00:17:51.241 fused_ordering(950) 00:17:51.241 fused_ordering(951) 00:17:51.241 fused_ordering(952) 00:17:51.241 fused_ordering(953) 00:17:51.241 fused_ordering(954) 00:17:51.241 fused_ordering(955) 00:17:51.241 fused_ordering(956) 00:17:51.241 fused_ordering(957) 00:17:51.241 fused_ordering(958) 
00:17:51.241 fused_ordering(959) 00:17:51.241 fused_ordering(960) 00:17:51.241 fused_ordering(961) 00:17:51.241 fused_ordering(962) 00:17:51.241 fused_ordering(963) 00:17:51.241 fused_ordering(964) 00:17:51.241 fused_ordering(965) 00:17:51.241 fused_ordering(966) 00:17:51.241 fused_ordering(967) 00:17:51.241 fused_ordering(968) 00:17:51.241 fused_ordering(969) 00:17:51.241 fused_ordering(970) 00:17:51.241 fused_ordering(971) 00:17:51.241 fused_ordering(972) 00:17:51.241 fused_ordering(973) 00:17:51.241 fused_ordering(974) 00:17:51.241 fused_ordering(975) 00:17:51.241 fused_ordering(976) 00:17:51.241 fused_ordering(977) 00:17:51.241 fused_ordering(978) 00:17:51.241 fused_ordering(979) 00:17:51.241 fused_ordering(980) 00:17:51.241 fused_ordering(981) 00:17:51.241 fused_ordering(982) 00:17:51.241 fused_ordering(983) 00:17:51.241 fused_ordering(984) 00:17:51.241 fused_ordering(985) 00:17:51.241 fused_ordering(986) 00:17:51.241 fused_ordering(987) 00:17:51.241 fused_ordering(988) 00:17:51.241 fused_ordering(989) 00:17:51.241 fused_ordering(990) 00:17:51.241 fused_ordering(991) 00:17:51.241 fused_ordering(992) 00:17:51.241 fused_ordering(993) 00:17:51.241 fused_ordering(994) 00:17:51.241 fused_ordering(995) 00:17:51.241 fused_ordering(996) 00:17:51.241 fused_ordering(997) 00:17:51.241 fused_ordering(998) 00:17:51.241 fused_ordering(999) 00:17:51.241 fused_ordering(1000) 00:17:51.241 fused_ordering(1001) 00:17:51.241 fused_ordering(1002) 00:17:51.241 fused_ordering(1003) 00:17:51.241 fused_ordering(1004) 00:17:51.241 fused_ordering(1005) 00:17:51.241 fused_ordering(1006) 00:17:51.241 fused_ordering(1007) 00:17:51.241 fused_ordering(1008) 00:17:51.241 fused_ordering(1009) 00:17:51.241 fused_ordering(1010) 00:17:51.241 fused_ordering(1011) 00:17:51.241 fused_ordering(1012) 00:17:51.241 fused_ordering(1013) 00:17:51.241 fused_ordering(1014) 00:17:51.241 fused_ordering(1015) 00:17:51.241 fused_ordering(1016) 00:17:51.241 fused_ordering(1017) 00:17:51.241 fused_ordering(1018) 00:17:51.241 fused_ordering(1019) 00:17:51.241 fused_ordering(1020) 00:17:51.241 fused_ordering(1021) 00:17:51.241 fused_ordering(1022) 00:17:51.241 fused_ordering(1023) 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.241 rmmod nvme_tcp 00:17:51.241 rmmod nvme_fabrics 00:17:51.241 rmmod nvme_keyring 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:51.241 22:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 284578 ']' 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 284578 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 284578 ']' 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 284578 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284578 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284578' 00:17:51.241 killing process with pid 284578 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 284578 00:17:51.241 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 284578 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.502 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.410 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.410 00:17:53.410 real 0m10.545s 00:17:53.410 user 0m5.083s 00:17:53.410 sys 0m5.530s 00:17:53.410 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.410 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:53.410 ************************************ 00:17:53.410 END TEST nvmf_fused_ordering 00:17:53.410 
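The fini path above tears everything down: the EXIT trap is cleared, nvme-tcp, nvme-fabrics, and nvme-keyring are unloaded, the target pid 284578 is killed after confirming its comm name is reactor_1, and the firewall and namespace state are reverted. A sketch of that revert step, assuming the SPDK-installed rules carry an SPDK_NVMF marker that grep can key on:

    # Reload the ruleset minus every SPDK-tagged rule, as in the iptr helper above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Unload the initiator stack; nvme-fabrics goes once nothing depends on it.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Drop the leftover IPv4 address from the host-side interface seen in this run.
    ip -4 addr flush cvl_0_1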
************************************ 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.670 ************************************ 00:17:53.670 START TEST nvmf_ns_masking 00:17:53.670 ************************************ 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:53.670 * Looking for test storage... 00:17:53.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.670 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:53.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.670 --rc genhtml_branch_coverage=1 00:17:53.670 --rc genhtml_function_coverage=1 00:17:53.671 --rc genhtml_legend=1 00:17:53.671 --rc geninfo_all_blocks=1 00:17:53.671 --rc geninfo_unexecuted_blocks=1 00:17:53.671 00:17:53.671 ' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:53.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.671 --rc genhtml_branch_coverage=1 00:17:53.671 --rc genhtml_function_coverage=1 00:17:53.671 --rc genhtml_legend=1 00:17:53.671 --rc geninfo_all_blocks=1 00:17:53.671 --rc geninfo_unexecuted_blocks=1 00:17:53.671 00:17:53.671 ' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:53.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.671 --rc genhtml_branch_coverage=1 00:17:53.671 --rc genhtml_function_coverage=1 00:17:53.671 --rc genhtml_legend=1 00:17:53.671 --rc geninfo_all_blocks=1 00:17:53.671 --rc geninfo_unexecuted_blocks=1 00:17:53.671 00:17:53.671 ' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:53.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.671 --rc genhtml_branch_coverage=1 00:17:53.671 --rc genhtml_function_coverage=1 00:17:53.671 --rc genhtml_legend=1 00:17:53.671 --rc geninfo_all_blocks=1 00:17:53.671 --rc geninfo_unexecuted_blocks=1 00:17:53.671 00:17:53.671 ' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:53.671 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=57e322b1-1a55-428e-bcb5-7f1c3e8d57e7 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1100fa39-91ad-4f6b-918f-a56d7074bb5f 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f9b181ed-1ea9-4af3-bd0b-0e1b40eafbfe 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.932 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.505 22:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.505 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.505 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:00.506 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:00.506 22:23:49 
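The pci_bus_cache lookups above key candidate NICs by PCI vendor:device ID (Intel E810 variants, X722, and a list of Mellanox parts). How the cache is populated is not shown in this excerpt; a hypothetical reconstruction from lspci -Dn output, assuming the cache maps "0xVEND:0xDEV" to space-separated BDFs:

    declare -A pci_bus_cache
    while read -r bdf _class vd _; do
        pci_bus_cache["0x${vd%%:*}:0x${vd##*:}"]+="$bdf "
    done < <(lspci -Dn)
    # E810-class ports as matched in this run (0x8086:0x159b):
    echo "${pci_bus_cache[0x8086:0x159b]}"    # -> 0000:af:00.0 0000:af:00.1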
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:00.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:00.506 Found net devices under 0000:af:00.0: cvl_0_0 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
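Mapping a PCI function to its kernel net device, as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above does, is a plain sysfs directory walk. A small sketch of the same idea, including the "up" check the trace performs:

    pci="0000:af:00.0"                          # example BDF from this run
    for d in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] || continue                 # glob may match nothing
        dev=${d##*/}                            # e.g. cvl_0_0
        state=$(cat "/sys/class/net/$dev/operstate")
        [ "$state" = up ] && echo "Found net devices under $pci: $dev"
    done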
00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:00.506 Found net devices under 0000:af:00.1: cvl_0_1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.506 22:23:49 
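The nvmf_tcp_init sequence above isolates the target-side port in its own network namespace so initiator and target traffic crosses a real TCP path on a single machine. Consolidated from the trace (interface and namespace names exactly as logged):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up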
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:18:00.506 00:18:00.506 --- 10.0.0.2 ping statistics --- 00:18:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.506 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:00.506 00:18:00.506 --- 10.0.0.1 ping statistics --- 00:18:00.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.506 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=288501 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 288501 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 288501 ']' 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
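Before the target starts, the harness opens TCP/4420 on the initiator-facing interface and pings in both directions, as traced above. The relevant commands, condensed:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # default ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator side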
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.506 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.506 [2024-12-16 22:23:49.347326] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:00.506 [2024-12-16 22:23:49.347373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.506 [2024-12-16 22:23:49.424452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.507 [2024-12-16 22:23:49.445780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.507 [2024-12-16 22:23:49.445815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.507 [2024-12-16 22:23:49.445822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.507 [2024-12-16 22:23:49.445829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.507 [2024-12-16 22:23:49.445834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
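waitforlisten blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A sketch of the pattern, assuming any cheap RPC (rpc_get_methods here) can serve as the liveness probe; the real helper's retry loop differs in detail:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done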
00:18:00.507 [2024-12-16 22:23:49.446325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:00.507 [2024-12-16 22:23:49.740839] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:00.507 Malloc1 00:18:00.507 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:00.507 Malloc2 00:18:00.507 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:00.766 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:01.025 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.284 [2024-12-16 22:23:50.751356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b181ed-1ea9-4af3-bd0b-0e1b40eafbfe -a 10.0.0.2 -s 4420 -i 4 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.284 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:01.284 
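The provisioning steps traced above, condensed into one runnable sequence (rpc.py path shortened; NQNs, serial, and host ID exactly as logged):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: connect with an explicit host NQN and host identifier.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I f9b181ed-1ea9-4af3-bd0b-0e1b40eafbfe -a 10.0.0.2 -s 4420 -i 4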
22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:03.821 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:03.821 [ 0]:0x1 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37b937a0317f46c6aebb73c05f277618 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37b937a0317f46c6aebb73c05f277618 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:03.821 [ 0]:0x1 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37b937a0317f46c6aebb73c05f277618 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37b937a0317f46c6aebb73c05f277618 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:03.821 22:23:53 
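The ns_is_visible checks above combine nvme list-ns with the NGUID reported by nvme id-ns: a masked namespace identifies with an all-zero NGUID. A condensed reconstruction of the helper, with the controller node hardcoded to the nvme0 resolved above:

    ns_is_visible() {
        local nsid=$1 nguid
        nvme list-ns /dev/nvme0 | grep "$nsid"       # prints e.g. "[ 0]:0x1" when present
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # Visible namespaces report their real NGUID; masked ones report all zeros.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "nsid 1 is visible to this host"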
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:03.821 [ 1]:0x2 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:03.821 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.081 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.081 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:04.340 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:04.340 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b181ed-1ea9-4af3-bd0b-0e1b40eafbfe -a 10.0.0.2 -s 4420 -i 4 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:04.340 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:06.876 [ 0]:0x2 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:06.876 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:06.877 [ 0]:0x1 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37b937a0317f46c6aebb73c05f277618 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37b937a0317f46c6aebb73c05f277618 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:06.877 [ 1]:0x2 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.877 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.136 22:23:56 
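The masking mechanics being exercised here: a namespace attached with --no-auto-visible stays hidden until a host NQN is explicitly allowed, and nvmf_ns_remove_host hides it again without a reconnect. The three RPCs, condensed from the trace:

    rpc=./scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask for host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask again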
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.136 [ 0]:0x2 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:07.136 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:07.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.395 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:07.395 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:07.395 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b181ed-1ea9-4af3-bd0b-0e1b40eafbfe -a 10.0.0.2 -s 4420 -i 4 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:07.654 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:09.559 [ 0]:0x1 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:09.559 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37b937a0317f46c6aebb73c05f277618 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37b937a0317f46c6aebb73c05f277618 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.818 [ 1]:0x2 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.818 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:10.078 [ 0]:0x2 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.078 22:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:10.078 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:10.337 [2024-12-16 22:23:59.812966] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:10.337 request: 00:18:10.337 { 00:18:10.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.337 "nsid": 2, 00:18:10.337 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.337 "method": "nvmf_ns_remove_host", 00:18:10.337 "req_id": 1 00:18:10.337 } 00:18:10.337 Got JSON-RPC error response 00:18:10.337 response: 00:18:10.337 { 00:18:10.337 "code": -32602, 00:18:10.337 "message": "Invalid parameters" 00:18:10.337 } 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:10.337 22:23:59 
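Here the harness expects the RPC itself to fail: namespace 2 was created auto-visible, so nvmf_ns_remove_host is rejected with JSON-RPC error -32602, and the NOT wrapper turns that rejection into a test pass. A minimal sketch of the wrapper, simplified from the valid_exec_arg machinery visible in the trace:

    # Succeed only when the wrapped command fails.
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 \
        nqn.2016-06.io.spdk:host1 && echo "rejected as expected"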
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:10.337 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:10.338 [ 0]:0x2 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:10.338 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:10.338 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=426cfd8897a34f5bba16243ac9e6d5ba 00:18:10.338 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 426cfd8897a34f5bba16243ac9e6d5ba != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:10.338 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:10.338 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:10.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290259 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290259 
/var/tmp/host.sock 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 290259 ']' 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.597 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.597 [2024-12-16 22:24:00.171019] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:10.597 [2024-12-16 22:24:00.171068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290259 ] 00:18:10.597 [2024-12-16 22:24:00.247389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.597 [2024-12-16 22:24:00.270175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.857 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.857 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:10.857 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:11.116 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:11.375 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7 00:18:11.375 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:11.375 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 57E322B11A55428EBCB57F1C3E8D57E7 -i 00:18:11.635 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1100fa39-91ad-4f6b-918f-a56d7074bb5f 00:18:11.635 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:11.635 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1100FA3991AD4F6B918FA56D7074BB5F -i 00:18:11.635 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
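uuid2nguid turns a canonical UUID into the 32-hex-digit NGUID form that the -g flag expects; the trace shows only the `tr -d -` half, and the uppercase value in the add_ns call suggests case-folding happens alongside it (the exact helper body is not shown in this excerpt). A sketch:

    uuid2nguid() { echo "${1^^}" | tr -d -; }      # uppercase, then strip dashes
    uuid2nguid 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7
    # -> 57E322B11A55428EBCB57F1C3E8D57E7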
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.894 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:12.153 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:12.153 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:12.722 nvme0n1 00:18:12.722 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:12.722 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:12.981 nvme1n2 00:18:12.981 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:12.981 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:12.981 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:12.981 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:12.981 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:13.240 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:13.240 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:13.240 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:13.240 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:13.499 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7 == \5\7\e\3\2\2\b\1\-\1\a\5\5\-\4\2\8\e\-\b\c\b\5\-\7\f\1\c\3\e\8\d\5\7\e\7 ]] 00:18:13.499 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:13.499 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:13.499 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:13.758 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
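For verification the test uses a second SPDK app (the spdk_tgt listening on /var/tmp/host.sock) as an NVMe-oF initiator: bdev_nvme_attach_controller logs in with each host NQN, and bdev_get_bdevs exposes the UUIDs of whatever namespaces that host was allowed to see. Condensed from the trace:

    hostrpc="./scripts/rpc.py -s /var/tmp/host.sock"
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    $hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # -> 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7, matching ns1's UUID above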
1100fa39-91ad-4f6b-918f-a56d7074bb5f == \1\1\0\0\f\a\3\9\-\9\1\a\d\-\4\f\6\b\-\9\1\8\f\-\a\5\6\d\7\0\7\4\b\b\5\f ]] 00:18:13.758 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.758 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 57E322B11A55428EBCB57F1C3E8D57E7 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 57E322B11A55428EBCB57F1C3E8D57E7 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:14.018 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 57E322B11A55428EBCB57F1C3E8D57E7 00:18:14.277 [2024-12-16 22:24:03.812338] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:14.277 [2024-12-16 22:24:03.812369] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:14.278 [2024-12-16 22:24:03.812378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.278 request: 00:18:14.278 { 00:18:14.278 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.278 "namespace": { 00:18:14.278 "bdev_name": 
"invalid", 00:18:14.278 "nsid": 1, 00:18:14.278 "nguid": "57E322B11A55428EBCB57F1C3E8D57E7", 00:18:14.278 "no_auto_visible": false, 00:18:14.278 "hide_metadata": false 00:18:14.278 }, 00:18:14.278 "method": "nvmf_subsystem_add_ns", 00:18:14.278 "req_id": 1 00:18:14.278 } 00:18:14.278 Got JSON-RPC error response 00:18:14.278 response: 00:18:14.278 { 00:18:14.278 "code": -32602, 00:18:14.278 "message": "Invalid parameters" 00:18:14.278 } 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 57e322b1-1a55-428e-bcb5-7f1c3e8d57e7 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:14.278 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 57E322B11A55428EBCB57F1C3E8D57E7 -i 00:18:14.537 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:16.527 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:16.527 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:16.527 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290259 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 290259 ']' 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 290259 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290259 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290259' 00:18:16.788 killing process with pid 290259 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 290259 00:18:16.788 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 290259 00:18:17.051 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:17.312 rmmod nvme_tcp 00:18:17.312 rmmod nvme_fabrics 00:18:17.312 rmmod nvme_keyring 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:17.312 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 288501 ']' 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 288501 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 288501 ']' 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 288501 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288501 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288501' 00:18:17.313 killing process with pid 288501 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 288501 00:18:17.313 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 288501 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.572 
22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.572 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:20.112 00:18:20.112 real 0m26.049s 00:18:20.112 user 0m31.389s 00:18:20.112 sys 0m7.094s 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:20.112 ************************************ 00:18:20.112 END TEST nvmf_ns_masking 00:18:20.112 ************************************ 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:20.112 ************************************ 00:18:20.112 START TEST nvmf_nvme_cli 00:18:20.112 ************************************ 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:20.112 * Looking for test storage... 
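The nvmf_ns_masking run that just finished boils down to a short RPC sequence: create namespaces with explicit NGUIDs (uuid2nguid is simply the UUID with its dashes stripped, as the tr -d - calls above show), grant per-host visibility with nvmf_ns_add_host, then attach from the second SPDK instance (the one listening on /var/tmp/host.sock) and confirm through bdev_get_bdevs that each host sees only its own namespace. A condensed sketch using the same paths, NQNs, and addresses as this run — the RPC/HOST_SOCK variable names are illustrative, not from the scripts:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  uuid=57e322b1-1a55-428e-bcb5-7f1c3e8d57e7
  nguid=$(echo "$uuid" | tr -d -)              # uuid2nguid: strip dashes; the run passes it uppercased
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "${nguid^^}" -i
  $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # attach from the host-side SPDK target and check what is visible
  $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  $RPC -s "$HOST_SOCK" bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect $uuid back

The negative check works the same way: nvmf_subsystem_add_ns with a nonexistent bdev name ("invalid") is expected to fail, producing exactly the bdev_open_ext error=-19 and the -32602 "Invalid parameters" JSON-RPC response captured earlier in the trace.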
00:18:20.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:20.112 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.113 --rc genhtml_branch_coverage=1 00:18:20.113 --rc genhtml_function_coverage=1 00:18:20.113 --rc genhtml_legend=1 00:18:20.113 --rc geninfo_all_blocks=1 00:18:20.113 --rc geninfo_unexecuted_blocks=1 00:18:20.113 00:18:20.113 ' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.113 --rc genhtml_branch_coverage=1 00:18:20.113 --rc genhtml_function_coverage=1 00:18:20.113 --rc genhtml_legend=1 00:18:20.113 --rc geninfo_all_blocks=1 00:18:20.113 --rc geninfo_unexecuted_blocks=1 00:18:20.113 00:18:20.113 ' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.113 --rc genhtml_branch_coverage=1 00:18:20.113 --rc genhtml_function_coverage=1 00:18:20.113 --rc genhtml_legend=1 00:18:20.113 --rc geninfo_all_blocks=1 00:18:20.113 --rc geninfo_unexecuted_blocks=1 00:18:20.113 00:18:20.113 ' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.113 --rc genhtml_branch_coverage=1 00:18:20.113 --rc genhtml_function_coverage=1 00:18:20.113 --rc genhtml_legend=1 00:18:20.113 --rc geninfo_all_blocks=1 00:18:20.113 --rc geninfo_unexecuted_blocks=1 00:18:20.113 00:18:20.113 ' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
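The scripts/common.sh trace above is the stock dotted-version comparison used to pick lcov options: split both version strings on ".-:", then compare component by component, treating missing components as zero; here it decides that lcov 1.15 is older than 2 and selects the old-style --rc lcov_branch_coverage flags. A condensed equivalent of that idea (the real helper routes through cmp_versions and decimal; this is a sketch, not the script's verbatim code):

  lt() {                                  # is $1 < $2 for dotted versions?
    local IFS=.-: i v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
  }
  lt 1.15 2 && echo "old lcov"            # the branch taken in this run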
00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.113 22:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:20.113 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:25.416 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:25.675 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:25.675 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:25.675 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.676 
22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:25.676 Found net devices under 0000:af:00.0: cvl_0_0 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:25.676 Found net devices under 0000:af:00.1: cvl_0_1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:25.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:18:25.676 00:18:25.676 --- 10.0.0.2 ping statistics --- 00:18:25.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.676 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:18:25.676 00:18:25.676 --- 10.0.0.1 ping statistics --- 00:18:25.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.676 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.676 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=295446 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 295446 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 295446 ']' 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.935 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.935 [2024-12-16 22:24:15.437806] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:25.935 [2024-12-16 22:24:15.437850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.935 [2024-12-16 22:24:15.513035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.935 [2024-12-16 22:24:15.536814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.935 [2024-12-16 22:24:15.536855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.935 [2024-12-16 22:24:15.536863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.935 [2024-12-16 22:24:15.536869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.935 [2024-12-16 22:24:15.536874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.935 [2024-12-16 22:24:15.538354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.935 [2024-12-16 22:24:15.538462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.935 [2024-12-16 22:24:15.538546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.935 [2024-12-16 22:24:15.538547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.194 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 [2024-12-16 22:24:15.678314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 Malloc0 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
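At this point the nvme_cli target has a TCP transport and two 64 MiB malloc bdevs; the records just below create the subsystem, attach both namespaces, and add the data and discovery listeners on 10.0.0.2:4420 (an address living inside the cvl_0_0_ns_spdk network namespace set up earlier). The whole bring-up, replayed as plain rpc.py calls against the default /var/tmp/spdk.sock, would look like this — every command is taken from the trace, only the replay context is assumed:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # -u: I/O unit size in bytes
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM disk, 512-byte blocks
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
      -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420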
00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 Malloc1 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 [2024-12-16 22:24:15.765758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.195 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:26.454 00:18:26.454 Discovery Log Number of Records 2, Generation counter 2 00:18:26.454 =====Discovery Log Entry 0====== 00:18:26.454 trtype: tcp 00:18:26.454 adrfam: ipv4 00:18:26.454 subtype: current discovery subsystem 00:18:26.454 treq: not required 00:18:26.454 portid: 0 00:18:26.454 trsvcid: 4420 00:18:26.454 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:26.454 traddr: 10.0.0.2 00:18:26.454 eflags: explicit discovery connections, duplicate discovery information 00:18:26.454 sectype: none 00:18:26.454 =====Discovery Log Entry 1====== 00:18:26.454 trtype: tcp 00:18:26.454 adrfam: ipv4 00:18:26.454 subtype: nvme subsystem 00:18:26.454 treq: not required 00:18:26.454 portid: 0 00:18:26.454 trsvcid: 4420 00:18:26.454 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:26.454 traddr: 10.0.0.2 00:18:26.454 eflags: none 00:18:26.454 sectype: none 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:26.454 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:27.389 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:29.928 22:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:29.928 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:29.929 /dev/nvme0n2 ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:29.929 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.191 22:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:30.191 rmmod nvme_tcp 00:18:30.191 rmmod nvme_fabrics 00:18:30.191 rmmod nvme_keyring 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 295446 ']' 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 295446 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 295446 ']' 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 295446 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295446 
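On the host side, the test drove the kernel initiator with plain nvme-cli; every command below appears verbatim in the trace, condensed here with the host identity generated for this run (run as root, nvme-cli installed):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect 2 namespaces
  nvme list                                                # shows /dev/nvme0n1 and /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Discovery first returned the two log entries seen above (the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1); after connect, both namespaces surface as block devices whose SERIAL matches the subsystem's SPDKISFASTANDAWESOME, which is what the waitforserial/waitforserial_disconnect polling loops key on.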
00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295446' 00:18:30.191 killing process with pid 295446 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 295446 00:18:30.191 22:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 295446 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.454 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:33.014 00:18:33.014 real 0m12.827s 00:18:33.014 user 0m19.473s 00:18:33.014 sys 0m5.138s 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.014 ************************************ 00:18:33.014 END TEST nvmf_nvme_cli 00:18:33.014 ************************************ 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.014 ************************************ 00:18:33.014 START TEST nvmf_vfio_user 00:18:33.014 ************************************ 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:33.014 * Looking for test storage... 00:18:33.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:33.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.014 --rc genhtml_branch_coverage=1 00:18:33.014 --rc genhtml_function_coverage=1 00:18:33.014 --rc genhtml_legend=1 00:18:33.014 --rc geninfo_all_blocks=1 00:18:33.014 --rc geninfo_unexecuted_blocks=1 00:18:33.014 00:18:33.014 ' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:33.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.014 --rc genhtml_branch_coverage=1 00:18:33.014 --rc genhtml_function_coverage=1 00:18:33.014 --rc genhtml_legend=1 00:18:33.014 --rc geninfo_all_blocks=1 00:18:33.014 --rc geninfo_unexecuted_blocks=1 00:18:33.014 00:18:33.014 ' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:33.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.014 --rc genhtml_branch_coverage=1 00:18:33.014 --rc genhtml_function_coverage=1 00:18:33.014 --rc genhtml_legend=1 00:18:33.014 --rc geninfo_all_blocks=1 00:18:33.014 --rc geninfo_unexecuted_blocks=1 00:18:33.014 00:18:33.014 ' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:33.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.014 --rc genhtml_branch_coverage=1 00:18:33.014 --rc genhtml_function_coverage=1 00:18:33.014 --rc genhtml_legend=1 00:18:33.014 --rc geninfo_all_blocks=1 00:18:33.014 --rc geninfo_unexecuted_blocks=1 00:18:33.014 00:18:33.014 ' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.014 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
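Before the vfio-user setup begins below, the get_nvme_devs helper traced during the nvme_cli run above (nvmf/common.sh@549-554) is worth seeing in one piece. Reconstructed from that xtrace (the exact plumbing between `nvme list` and the read loop is an assumption; the real helper lives in nvmf/common.sh), it filters `nvme list` output down to device nodes:

    get_nvme_devs() {
        local dev _
        # keep only lines whose first field is a device node, e.g. /dev/nvme0n1;
        # header lines like "Node" and "---------------------" are skipped
        while read -r dev _; do
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }
    devs=($(get_nvme_devs))   # nvme_cli.sh@59 then counts ${#devs[@]} before/after connect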
00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296764 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296764' 00:18:33.015 Process pid: 296764 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296764 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 296764 ']' 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:33.015 [2024-12-16 22:24:22.446236] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:33.015 [2024-12-16 22:24:22.446286] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.015 [2024-12-16 22:24:22.519571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.015 [2024-12-16 22:24:22.542555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.015 [2024-12-16 22:24:22.542592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:33.015 [2024-12-16 22:24:22.542599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.015 [2024-12-16 22:24:22.542606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.015 [2024-12-16 22:24:22.542611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.015 [2024-12-16 22:24:22.544049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.015 [2024-12-16 22:24:22.544158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.015 [2024-12-16 22:24:22.544267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.015 [2024-12-16 22:24:22.544267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:33.015 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:33.951 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:34.220 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:34.220 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:34.220 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:34.220 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:34.220 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:34.508 Malloc1 00:18:34.508 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:34.798 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:35.084 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:35.084 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.084 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:35.084 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:35.358 Malloc2 00:18:35.358 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
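The RPC calls traced around this point (the cnode2 half continues just below) are driven by the loop at target/nvmf_vfio_user.sh@68-74. Condensed into a sketch, with the sizes, NQNs, and socket paths copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do                                    # NUM_DEVICES=2
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i      # 64 MB malloc bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Each listener address is the vfio-user socket directory that spdk_nvme_identify and spdk_nvme_perf later pass as traddr.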
00:18:35.618 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:35.618 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:35.878 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:35.878 [2024-12-16 22:24:25.548766] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:35.878 [2024-12-16 22:24:25.548815] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297367 ] 00:18:36.143 [2024-12-16 22:24:25.587507] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:36.143 [2024-12-16 22:24:25.592932] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:36.143 [2024-12-16 22:24:25.592953] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc0cbf81000 00:18:36.143 [2024-12-16 22:24:25.593927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.594930] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.595940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.596943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.597951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.598948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.599956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.600974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:36.143 [2024-12-16 22:24:25.601969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:36.143 [2024-12-16 22:24:25.601983] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc0cac8a000 00:18:36.143 [2024-12-16 22:24:25.602888] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:36.143 [2024-12-16 22:24:25.612277] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:36.143 [2024-12-16 22:24:25.612306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:36.143 [2024-12-16 22:24:25.618065] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:36.143 [2024-12-16 22:24:25.618099] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:36.143 [2024-12-16 22:24:25.618175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:36.143 [2024-12-16 22:24:25.618195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:36.143 [2024-12-16 22:24:25.618201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:36.143 [2024-12-16 22:24:25.619059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:36.143 [2024-12-16 22:24:25.619068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:36.143 [2024-12-16 22:24:25.619075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:36.143 [2024-12-16 22:24:25.620064] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:36.143 [2024-12-16 22:24:25.620072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:36.143 [2024-12-16 22:24:25.620078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.621069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:36.143 [2024-12-16 22:24:25.621076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.622076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:36.143 [2024-12-16 22:24:25.622084] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:36.143 [2024-12-16 22:24:25.622088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.622094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.622201] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:36.143 [2024-12-16 22:24:25.622206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.622211] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:36.143 [2024-12-16 22:24:25.623082] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:36.143 [2024-12-16 22:24:25.624088] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:36.143 [2024-12-16 22:24:25.625093] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:36.143 [2024-12-16 22:24:25.626095] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:36.143 [2024-12-16 22:24:25.626182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:36.143 [2024-12-16 22:24:25.627118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:36.143 [2024-12-16 22:24:25.627125] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:36.143 [2024-12-16 22:24:25.627130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:36.144 [2024-12-16 22:24:25.627153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627165] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.144 [2024-12-16 22:24:25.627169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.144 [2024-12-16 22:24:25.627173] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.144 [2024-12-16 22:24:25.627186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627251] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:36.144 [2024-12-16 22:24:25.627255] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:36.144 [2024-12-16 22:24:25.627259] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:36.144 [2024-12-16 22:24:25.627263] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:36.144 [2024-12-16 22:24:25.627267] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:36.144 [2024-12-16 22:24:25.627271] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:36.144 [2024-12-16 22:24:25.627275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.144 [2024-12-16 22:24:25.627327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.144 [2024-12-16 22:24:25.627334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.144 [2024-12-16 22:24:25.627341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:36.144 [2024-12-16 22:24:25.627345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627378] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:36.144 
[2024-12-16 22:24:25.627383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627477] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:36.144 [2024-12-16 22:24:25.627482] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:36.144 [2024-12-16 22:24:25.627485] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.144 [2024-12-16 22:24:25.627490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627510] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:36.144 [2024-12-16 22:24:25.627517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627530] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.144 [2024-12-16 22:24:25.627534] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.144 [2024-12-16 22:24:25.627538] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.144 [2024-12-16 22:24:25.627544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627592] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:36.144 [2024-12-16 22:24:25.627596] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.144 [2024-12-16 22:24:25.627599] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.144 [2024-12-16 22:24:25.627604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627654] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:36.144 [2024-12-16 22:24:25.627658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:36.144 [2024-12-16 22:24:25.627663] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:36.144 [2024-12-16 22:24:25.627680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:36.144 [2024-12-16 22:24:25.627753] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:36.144 [2024-12-16 22:24:25.627757] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:36.144 [2024-12-16 22:24:25.627760] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:36.144 [2024-12-16 22:24:25.627763] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:36.144 [2024-12-16 22:24:25.627766] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:36.144 [2024-12-16 22:24:25.627772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:36.144 [2024-12-16 22:24:25.627778] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:36.144 [2024-12-16 22:24:25.627782] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:36.144 [2024-12-16 22:24:25.627784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.144 [2024-12-16 22:24:25.627790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:36.144 [2024-12-16 22:24:25.627796] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:36.144 [2024-12-16 22:24:25.627799] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:36.144 [2024-12-16 22:24:25.627803] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.145 [2024-12-16 22:24:25.627808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:36.145 [2024-12-16 22:24:25.627814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:36.145 [2024-12-16 22:24:25.627818] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:36.145 [2024-12-16 22:24:25.627821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:36.145 [2024-12-16 22:24:25.627826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:36.145 [2024-12-16 22:24:25.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:36.145 [2024-12-16 22:24:25.627844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:36.145 [2024-12-16 22:24:25.627853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:36.145 [2024-12-16 22:24:25.627859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:36.145 ===================================================== 00:18:36.145 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:36.145 ===================================================== 00:18:36.145 Controller Capabilities/Features 00:18:36.145 ================================ 00:18:36.145 Vendor ID: 4e58 00:18:36.145 Subsystem Vendor ID: 4e58 00:18:36.145 Serial Number: SPDK1 00:18:36.145 Model Number: SPDK bdev Controller 00:18:36.145 Firmware Version: 25.01 00:18:36.145 Recommended Arb Burst: 6 00:18:36.145 IEEE OUI Identifier: 8d 6b 50 00:18:36.145 Multi-path I/O 00:18:36.145 May have multiple subsystem ports: Yes 00:18:36.145 May have multiple controllers: Yes 00:18:36.145 Associated with SR-IOV VF: No 00:18:36.145 Max Data Transfer Size: 131072 00:18:36.145 Max Number of Namespaces: 32 00:18:36.145 Max Number of I/O Queues: 127 00:18:36.145 NVMe Specification Version (VS): 1.3 00:18:36.145 NVMe Specification Version (Identify): 1.3 00:18:36.145 Maximum Queue Entries: 256 00:18:36.145 Contiguous Queues Required: Yes 00:18:36.145 Arbitration Mechanisms Supported 00:18:36.145 Weighted Round Robin: Not Supported 00:18:36.145 Vendor Specific: Not Supported 00:18:36.145 Reset Timeout: 15000 ms 00:18:36.145 Doorbell Stride: 4 bytes 00:18:36.145 NVM Subsystem Reset: Not Supported 00:18:36.145 Command Sets Supported 00:18:36.145 NVM Command Set: Supported 00:18:36.145 Boot Partition: Not Supported 00:18:36.145 Memory Page Size Minimum: 4096 bytes 00:18:36.145 Memory Page Size Maximum: 4096 bytes 00:18:36.145 Persistent Memory Region: Not Supported 00:18:36.145 Optional Asynchronous Events Supported 00:18:36.145 Namespace Attribute Notices: Supported 00:18:36.145 Firmware Activation Notices: Not Supported 00:18:36.145 ANA Change Notices: Not Supported 00:18:36.145 PLE Aggregate Log Change Notices: Not Supported 00:18:36.145 LBA Status Info Alert Notices: Not Supported 00:18:36.145 EGE Aggregate Log Change Notices: Not Supported 00:18:36.145 Normal NVM Subsystem Shutdown event: Not Supported 00:18:36.145 Zone Descriptor Change Notices: Not Supported 00:18:36.145 Discovery Log Change Notices: Not Supported 00:18:36.145 Controller Attributes 00:18:36.145 128-bit Host Identifier: Supported 00:18:36.145 Non-Operational Permissive Mode: Not Supported 00:18:36.145 NVM Sets: Not Supported 00:18:36.145 Read Recovery Levels: Not Supported 00:18:36.145 Endurance Groups: Not Supported 00:18:36.145 Predictable Latency Mode: Not Supported 00:18:36.145 Traffic Based Keep ALive: Not Supported 00:18:36.145 Namespace Granularity: Not Supported 00:18:36.145 SQ Associations: Not Supported 00:18:36.145 UUID List: Not Supported 00:18:36.145 Multi-Domain Subsystem: Not Supported 00:18:36.145 Fixed Capacity Management: Not Supported 00:18:36.145 Variable Capacity Management: Not Supported 00:18:36.145 Delete Endurance Group: Not Supported 00:18:36.145 Delete NVM Set: Not Supported 00:18:36.145 Extended LBA Formats Supported: Not Supported 00:18:36.145 Flexible Data Placement Supported: Not Supported 00:18:36.145 00:18:36.145 Controller Memory Buffer Support 00:18:36.145 ================================ 00:18:36.145 
Supported: No 00:18:36.145 00:18:36.145 Persistent Memory Region Support 00:18:36.145 ================================ 00:18:36.145 Supported: No 00:18:36.145 00:18:36.145 Admin Command Set Attributes 00:18:36.145 ============================ 00:18:36.145 Security Send/Receive: Not Supported 00:18:36.145 Format NVM: Not Supported 00:18:36.145 Firmware Activate/Download: Not Supported 00:18:36.145 Namespace Management: Not Supported 00:18:36.145 Device Self-Test: Not Supported 00:18:36.145 Directives: Not Supported 00:18:36.145 NVMe-MI: Not Supported 00:18:36.145 Virtualization Management: Not Supported 00:18:36.145 Doorbell Buffer Config: Not Supported 00:18:36.145 Get LBA Status Capability: Not Supported 00:18:36.145 Command & Feature Lockdown Capability: Not Supported 00:18:36.145 Abort Command Limit: 4 00:18:36.145 Async Event Request Limit: 4 00:18:36.145 Number of Firmware Slots: N/A 00:18:36.145 Firmware Slot 1 Read-Only: N/A 00:18:36.145 Firmware Activation Without Reset: N/A 00:18:36.145 Multiple Update Detection Support: N/A 00:18:36.145 Firmware Update Granularity: No Information Provided 00:18:36.145 Per-Namespace SMART Log: No 00:18:36.145 Asymmetric Namespace Access Log Page: Not Supported 00:18:36.145 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:36.145 Command Effects Log Page: Supported 00:18:36.145 Get Log Page Extended Data: Supported 00:18:36.145 Telemetry Log Pages: Not Supported 00:18:36.145 Persistent Event Log Pages: Not Supported 00:18:36.145 Supported Log Pages Log Page: May Support 00:18:36.145 Commands Supported & Effects Log Page: Not Supported 00:18:36.145 Feature Identifiers & Effects Log Page:May Support 00:18:36.145 NVMe-MI Commands & Effects Log Page: May Support 00:18:36.145 Data Area 4 for Telemetry Log: Not Supported 00:18:36.145 Error Log Page Entries Supported: 128 00:18:36.145 Keep Alive: Supported 00:18:36.145 Keep Alive Granularity: 10000 ms 00:18:36.145 00:18:36.145 NVM Command Set Attributes 00:18:36.145 ========================== 00:18:36.145 Submission Queue Entry Size 00:18:36.145 Max: 64 00:18:36.145 Min: 64 00:18:36.145 Completion Queue Entry Size 00:18:36.145 Max: 16 00:18:36.145 Min: 16 00:18:36.145 Number of Namespaces: 32 00:18:36.145 Compare Command: Supported 00:18:36.145 Write Uncorrectable Command: Not Supported 00:18:36.145 Dataset Management Command: Supported 00:18:36.145 Write Zeroes Command: Supported 00:18:36.145 Set Features Save Field: Not Supported 00:18:36.145 Reservations: Not Supported 00:18:36.145 Timestamp: Not Supported 00:18:36.145 Copy: Supported 00:18:36.145 Volatile Write Cache: Present 00:18:36.145 Atomic Write Unit (Normal): 1 00:18:36.145 Atomic Write Unit (PFail): 1 00:18:36.145 Atomic Compare & Write Unit: 1 00:18:36.145 Fused Compare & Write: Supported 00:18:36.145 Scatter-Gather List 00:18:36.145 SGL Command Set: Supported (Dword aligned) 00:18:36.145 SGL Keyed: Not Supported 00:18:36.145 SGL Bit Bucket Descriptor: Not Supported 00:18:36.145 SGL Metadata Pointer: Not Supported 00:18:36.145 Oversized SGL: Not Supported 00:18:36.145 SGL Metadata Address: Not Supported 00:18:36.145 SGL Offset: Not Supported 00:18:36.145 Transport SGL Data Block: Not Supported 00:18:36.145 Replay Protected Memory Block: Not Supported 00:18:36.145 00:18:36.145 Firmware Slot Information 00:18:36.145 ========================= 00:18:36.145 Active slot: 1 00:18:36.145 Slot 1 Firmware Revision: 25.01 00:18:36.145 00:18:36.145 00:18:36.145 Commands Supported and Effects 00:18:36.145 ============================== 00:18:36.145 Admin 
Commands 00:18:36.145 -------------- 00:18:36.145 Get Log Page (02h): Supported 00:18:36.145 Identify (06h): Supported 00:18:36.145 Abort (08h): Supported 00:18:36.145 Set Features (09h): Supported 00:18:36.145 Get Features (0Ah): Supported 00:18:36.145 Asynchronous Event Request (0Ch): Supported 00:18:36.145 Keep Alive (18h): Supported 00:18:36.145 I/O Commands 00:18:36.145 ------------ 00:18:36.145 Flush (00h): Supported LBA-Change 00:18:36.145 Write (01h): Supported LBA-Change 00:18:36.145 Read (02h): Supported 00:18:36.145 Compare (05h): Supported 00:18:36.145 Write Zeroes (08h): Supported LBA-Change 00:18:36.145 Dataset Management (09h): Supported LBA-Change 00:18:36.145 Copy (19h): Supported LBA-Change 00:18:36.145 00:18:36.145 Error Log 00:18:36.145 ========= 00:18:36.145 00:18:36.145 Arbitration 00:18:36.145 =========== 00:18:36.145 Arbitration Burst: 1 00:18:36.145 00:18:36.145 Power Management 00:18:36.145 ================ 00:18:36.145 Number of Power States: 1 00:18:36.145 Current Power State: Power State #0 00:18:36.145 Power State #0: 00:18:36.145 Max Power: 0.00 W 00:18:36.145 Non-Operational State: Operational 00:18:36.145 Entry Latency: Not Reported 00:18:36.145 Exit Latency: Not Reported 00:18:36.145 Relative Read Throughput: 0 00:18:36.145 Relative Read Latency: 0 00:18:36.145 Relative Write Throughput: 0 00:18:36.145 Relative Write Latency: 0 00:18:36.145 Idle Power: Not Reported 00:18:36.145 Active Power: Not Reported 00:18:36.145 Non-Operational Permissive Mode: Not Supported 00:18:36.145 00:18:36.145 Health Information 00:18:36.146 ================== 00:18:36.146 Critical Warnings: 00:18:36.146 Available Spare Space: OK 00:18:36.146 Temperature: OK 00:18:36.146 Device Reliability: OK 00:18:36.146 Read Only: No 00:18:36.146 Volatile Memory Backup: OK 00:18:36.146 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:36.146 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:36.146 Available Spare: 0% 00:18:36.146 Available Spare Threshold: 0% 00:18:36.146 [2024-12-16 22:24:25.627939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:36.146 [2024-12-16 22:24:25.627946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:36.146 [2024-12-16 22:24:25.627969] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:36.146 [2024-12-16 22:24:25.627978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.146 [2024-12-16 22:24:25.627983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.146 [2024-12-16 22:24:25.627991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.146 [2024-12-16 22:24:25.627996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:36.146 [2024-12-16 22:24:25.628121] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:36.146 [2024-12-16 22:24:25.628132] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:36.146 [2024-12-16 22:24:25.629125] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:36.146 [2024-12-16 22:24:25.631201] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:36.146 [2024-12-16 22:24:25.631209] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:36.146 [2024-12-16 22:24:25.632141] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:36.146 [2024-12-16 22:24:25.632150] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:36.146 [2024-12-16 22:24:25.632207] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:36.146 [2024-12-16 22:24:25.633168] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:36.146 Life Percentage Used: 0% 00:18:36.146 Data Units Read: 0 00:18:36.146 Data Units Written: 0 00:18:36.146 Host Read Commands: 0 00:18:36.146 Host Write Commands: 0 00:18:36.146 Controller Busy Time: 0 minutes 00:18:36.146 Power Cycles: 0 00:18:36.146 Power On Hours: 0 hours 00:18:36.146 Unsafe Shutdowns: 0 00:18:36.146 Unrecoverable Media Errors: 0 00:18:36.146 Lifetime Error Log Entries: 0 00:18:36.146 Warning Temperature Time: 0 minutes 00:18:36.146 Critical Temperature Time: 0 minutes 00:18:36.146 00:18:36.146 Number of Queues 00:18:36.146 ================ 00:18:36.146 Number of I/O Submission Queues: 127 00:18:36.146 Number of I/O Completion Queues: 127 00:18:36.146 00:18:36.146 Active Namespaces 00:18:36.146 ================= 00:18:36.146 Namespace ID:1 00:18:36.146 Error Recovery Timeout: Unlimited 00:18:36.146 Command Set Identifier: NVM (00h) 00:18:36.146 Deallocate: Supported 00:18:36.146 Deallocated/Unwritten Error: Not Supported 00:18:36.146 Deallocated Read Value: Unknown 00:18:36.146 Deallocate in Write Zeroes: Not Supported 00:18:36.146 Deallocated Guard Field: 0xFFFF 00:18:36.146 Flush: Supported 00:18:36.146 Reservation: Supported 00:18:36.146 Namespace Sharing Capabilities: Multiple Controllers 00:18:36.146 Size (in LBAs): 131072 (0GiB) 00:18:36.146 Capacity (in LBAs): 131072 (0GiB) 00:18:36.146 Utilization (in LBAs): 131072 (0GiB) 00:18:36.146 NGUID: ECD5F4CFE576475187E9649D08914B01 00:18:36.146 UUID: ecd5f4cf-e576-4751-87e9-649d08914b01 00:18:36.146 Thin Provisioning: Not Supported 00:18:36.146 Per-NS Atomic Units: Yes 00:18:36.146 Atomic Boundary Size (Normal): 0 00:18:36.146 Atomic Boundary Size (PFail): 0 00:18:36.146 Atomic Boundary Offset: 0 00:18:36.146 Maximum Single Source Range Length: 65535 00:18:36.146 Maximum Copy Length: 65535 00:18:36.146 Maximum Source Range Count: 1 00:18:36.146 NGUID/EUI64 Never Reused: No 00:18:36.146 Namespace Write Protected: No 00:18:36.146 Number of LBA Formats: 1 00:18:36.146 Current LBA Format: LBA Format #00 00:18:36.146 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:36.146 00:18:36.146 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
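A quick sanity check on the spdk_nvme_perf tables that follow: at a fixed queue depth, the MiB/s and average-latency columns are fully determined by the IOPS column, so the results can be cross-checked by hand. A minimal sketch in plain awk (not part of the test suite; the 4096-byte I/O size and queue depth 128 are taken from the -o and -q flags above, and the IOPS value from the read run's table below):

    # Throughput = IOPS * io_size; by Little's law, average latency ~= queue_depth / IOPS.
    awk -v iops=39919.29 -v qd=128 -v bs=4096 'BEGIN {
        printf "MiB/s : %.2f\n", iops * bs / (1024 * 1024)   # -> 155.93, matching the table
        printf "avg us: %.2f\n", qd / iops * 1e6             # -> ~3206, close to the reported 3206.08
    }'

The same arithmetic reproduces the write run further down: 16063.71 IOPS * 4096 B ~= 62.75 MiB/s, and 128 / 16063.71 ~= 7968 us against the reported 7973.66 us average.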
00:18:36.417 [2024-12-16 22:24:25.857029] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:41.903 Initializing NVMe Controllers 00:18:41.903 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:41.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:41.903 Initialization complete. Launching workers. 00:18:41.903 ======================================================== 00:18:41.903 Latency(us) 00:18:41.903 Device Information : IOPS MiB/s Average min max 00:18:41.903 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39919.29 155.93 3206.08 968.50 7601.09 00:18:41.903 ======================================================== 00:18:41.903 Total : 39919.29 155.93 3206.08 968.50 7601.09 00:18:41.903 00:18:41.903 [2024-12-16 22:24:30.875662] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:41.903 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:41.903 [2024-12-16 22:24:31.101721] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.358 Initializing NVMe Controllers 00:18:47.358 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:47.358 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:47.358 Initialization complete. Launching workers. 
00:18:47.358 ======================================================== 00:18:47.358 Latency(us) 00:18:47.358 Device Information : IOPS MiB/s Average min max 00:18:47.358 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16063.71 62.75 7973.66 5983.66 9977.25 00:18:47.358 ======================================================== 00:18:47.358 Total : 16063.71 62.75 7973.66 5983.66 9977.25 00:18:47.358 00:18:47.358 [2024-12-16 22:24:36.140149] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.358 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:47.358 [2024-12-16 22:24:36.344110] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:51.727 [2024-12-16 22:24:41.413476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:51.985 Initializing NVMe Controllers 00:18:51.985 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:51.985 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:51.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:51.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:51.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:51.985 Initialization complete. Launching workers. 00:18:51.985 Starting thread on core 2 00:18:51.985 Starting thread on core 3 00:18:51.985 Starting thread on core 1 00:18:51.985 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:52.243 [2024-12-16 22:24:41.711577] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.532 [2024-12-16 22:24:44.769932] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.532 Initializing NVMe Controllers 00:18:55.532 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.532 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.532 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:55.532 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:55.532 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:55.532 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:55.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:55.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:55.532 Initialization complete. Launching workers. 
00:18:55.532 Starting thread on core 1 with urgent priority queue 00:18:55.532 Starting thread on core 2 with urgent priority queue 00:18:55.532 Starting thread on core 3 with urgent priority queue 00:18:55.532 Starting thread on core 0 with urgent priority queue 00:18:55.532 SPDK bdev Controller (SPDK1 ) core 0: 8653.33 IO/s 11.56 secs/100000 ios 00:18:55.532 SPDK bdev Controller (SPDK1 ) core 1: 9150.33 IO/s 10.93 secs/100000 ios 00:18:55.532 SPDK bdev Controller (SPDK1 ) core 2: 7683.00 IO/s 13.02 secs/100000 ios 00:18:55.532 SPDK bdev Controller (SPDK1 ) core 3: 8715.00 IO/s 11.47 secs/100000 ios 00:18:55.532 ======================================================== 00:18:55.532 00:18:55.532 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:55.532 [2024-12-16 22:24:45.058644] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.532 Initializing NVMe Controllers 00:18:55.532 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.532 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.532 Namespace ID: 1 size: 0GB 00:18:55.532 Initialization complete. 00:18:55.532 INFO: using host memory buffer for IO 00:18:55.532 Hello world! 00:18:55.532 [2024-12-16 22:24:45.092872] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.532 22:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:55.790 [2024-12-16 22:24:45.368654] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:56.723 Initializing NVMe Controllers 00:18:56.723 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:56.723 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:56.723 Initialization complete. Launching workers. 
00:18:56.723 submit (in ns) avg, min, max = 6319.3, 3154.3, 3999486.7 00:18:56.723 complete (in ns) avg, min, max = 21832.9, 1723.8, 4005959.0 00:18:56.723 00:18:56.723 Submit histogram 00:18:56.723 ================ 00:18:56.723 Range in us Cumulative Count 00:18:56.723 3.154 - 3.170: 0.0489% ( 8) 00:18:56.723 3.170 - 3.185: 0.1467% ( 16) 00:18:56.723 3.185 - 3.200: 0.2567% ( 18) 00:18:56.723 3.200 - 3.215: 0.5440% ( 47) 00:18:56.723 3.215 - 3.230: 1.8889% ( 220) 00:18:56.723 3.230 - 3.246: 6.0456% ( 680) 00:18:56.723 3.246 - 3.261: 11.7061% ( 926) 00:18:56.723 3.261 - 3.276: 17.7272% ( 985) 00:18:56.723 3.276 - 3.291: 25.2277% ( 1227) 00:18:56.723 3.291 - 3.307: 32.0313% ( 1113) 00:18:56.723 3.307 - 3.322: 37.3434% ( 869) 00:18:56.723 3.322 - 3.337: 42.4048% ( 828) 00:18:56.723 3.337 - 3.352: 47.4601% ( 827) 00:18:56.723 3.352 - 3.368: 51.5190% ( 664) 00:18:56.723 3.368 - 3.383: 55.6391% ( 674) 00:18:56.723 3.383 - 3.398: 62.7972% ( 1171) 00:18:56.723 3.398 - 3.413: 68.3171% ( 903) 00:18:56.723 3.413 - 3.429: 74.2405% ( 969) 00:18:56.723 3.429 - 3.444: 79.6931% ( 892) 00:18:56.723 3.444 - 3.459: 83.6787% ( 652) 00:18:56.723 3.459 - 3.474: 85.9099% ( 365) 00:18:56.723 3.474 - 3.490: 86.9735% ( 174) 00:18:56.723 3.490 - 3.505: 87.7254% ( 123) 00:18:56.723 3.505 - 3.520: 88.2206% ( 81) 00:18:56.723 3.520 - 3.535: 88.8074% ( 96) 00:18:56.723 3.535 - 3.550: 89.6632% ( 140) 00:18:56.723 3.550 - 3.566: 90.5006% ( 137) 00:18:56.723 3.566 - 3.581: 91.4237% ( 151) 00:18:56.723 3.581 - 3.596: 92.2917% ( 142) 00:18:56.723 3.596 - 3.611: 93.0925% ( 131) 00:18:56.723 3.611 - 3.627: 93.9055% ( 133) 00:18:56.723 3.627 - 3.642: 94.6757% ( 126) 00:18:56.723 3.642 - 3.657: 95.5804% ( 148) 00:18:56.723 3.657 - 3.672: 96.5401% ( 157) 00:18:56.723 3.672 - 3.688: 97.3042% ( 125) 00:18:56.723 3.688 - 3.703: 97.9033% ( 98) 00:18:56.723 3.703 - 3.718: 98.3067% ( 66) 00:18:56.723 3.718 - 3.733: 98.6735% ( 60) 00:18:56.723 3.733 - 3.749: 98.9058% ( 38) 00:18:56.723 3.749 - 3.764: 99.1136% ( 34) 00:18:56.723 3.764 - 3.779: 99.3215% ( 34) 00:18:56.723 3.779 - 3.794: 99.4315% ( 18) 00:18:56.723 3.794 - 3.810: 99.5171% ( 14) 00:18:56.723 3.810 - 3.825: 99.5660% ( 8) 00:18:56.723 3.825 - 3.840: 99.6027% ( 6) 00:18:56.723 3.840 - 3.855: 99.6149% ( 2) 00:18:56.723 3.855 - 3.870: 99.6271% ( 2) 00:18:56.723 3.886 - 3.901: 99.6393% ( 2) 00:18:56.723 5.059 - 5.090: 99.6455% ( 1) 00:18:56.723 5.120 - 5.150: 99.6516% ( 1) 00:18:56.723 5.211 - 5.242: 99.6638% ( 2) 00:18:56.723 5.303 - 5.333: 99.6821% ( 3) 00:18:56.723 5.394 - 5.425: 99.6882% ( 1) 00:18:56.723 5.486 - 5.516: 99.7005% ( 2) 00:18:56.723 5.516 - 5.547: 99.7066% ( 1) 00:18:56.723 5.577 - 5.608: 99.7127% ( 1) 00:18:56.723 5.608 - 5.638: 99.7249% ( 2) 00:18:56.723 5.638 - 5.669: 99.7310% ( 1) 00:18:56.723 5.699 - 5.730: 99.7371% ( 1) 00:18:56.723 5.760 - 5.790: 99.7433% ( 1) 00:18:56.723 5.821 - 5.851: 99.7494% ( 1) 00:18:56.723 5.943 - 5.973: 99.7555% ( 1) 00:18:56.723 5.973 - 6.004: 99.7616% ( 1) 00:18:56.723 6.065 - 6.095: 99.7677% ( 1) 00:18:56.723 6.156 - 6.187: 99.7738% ( 1) 00:18:56.723 6.217 - 6.248: 99.7799% ( 1) 00:18:56.723 6.248 - 6.278: 99.7861% ( 1) 00:18:56.723 6.278 - 6.309: 99.7922% ( 1) 00:18:56.723 6.339 - 6.370: 99.8044% ( 2) 00:18:56.723 6.400 - 6.430: 99.8105% ( 1) 00:18:56.723 6.430 - 6.461: 99.8166% ( 1) 00:18:56.723 6.461 - 6.491: 99.8227% ( 1) 00:18:56.723 6.522 - 6.552: 99.8288% ( 1) 00:18:56.723 7.101 - 7.131: 99.8411% ( 2) 00:18:56.723 7.223 - 7.253: 99.8533% ( 2) 00:18:56.723 7.375 - 7.406: 99.8594% ( 1) 00:18:56.723 7.589 - 7.619: 99.8716% 
( 2) 00:18:56.723 7.619 - 7.650: 99.8777% ( 1) 00:18:56.723 7.924 - 7.985: 99.8839% ( 1) 00:18:56.723 [2024-12-16 22:24:46.389667] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:56.982 7.985 - 8.046: 99.8900% ( 1) 00:18:56.982 8.046 - 8.107: 99.8961% ( 1) 00:18:56.982 8.229 - 8.290: 99.9022% ( 1) 00:18:56.982 8.350 - 8.411: 99.9083% ( 1) 00:18:56.982 13.653 - 13.714: 99.9144% ( 1) 00:18:56.982 15.116 - 15.177: 99.9205% ( 1) 00:18:56.982 19.261 - 19.383: 99.9266% ( 1) 00:18:56.982 3994.575 - 4025.783: 100.0000% ( 12) 00:18:56.982 00:18:56.982 Complete histogram 00:18:56.982 ================== 00:18:56.982 Range in us Cumulative Count 00:18:56.982 1.722 - 1.730: 0.0122% ( 2) 00:18:56.982 1.730 - 1.737: 0.0856% ( 12) 00:18:56.982 1.737 - 1.745: 0.3240% ( 39) 00:18:56.982 1.745 - 1.752: 0.7763% ( 74) 00:18:56.982 1.752 - 1.760: 1.0942% ( 52) 00:18:56.982 1.760 - 1.768: 1.1798% ( 14) 00:18:56.982 1.768 - 1.775: 1.3020% ( 20) 00:18:56.982 1.775 - 1.783: 2.4696% ( 191) 00:18:56.982 1.783 - 1.790: 10.9787% ( 1392) 00:18:56.982 1.790 - 1.798: 33.1805% ( 3632) 00:18:56.982 1.798 - 1.806: 57.3201% ( 3949) 00:18:56.982 1.806 - 1.813: 68.1643% ( 1774) 00:18:56.982 1.813 - 1.821: 72.2721% ( 672) 00:18:56.982 1.821 - 1.829: 74.7723% ( 409) 00:18:56.982 1.829 - 1.836: 76.7040% ( 316) 00:18:56.982 1.836 - 1.844: 79.6442% ( 481) 00:18:56.982 1.844 - 1.851: 85.3047% ( 926) 00:18:56.982 1.851 - 1.859: 91.0814% ( 945) 00:18:56.982 1.859 - 1.867: 94.4740% ( 555) 00:18:56.982 1.867 - 1.874: 96.1917% ( 281) 00:18:56.982 1.874 - 1.882: 97.3348% ( 187) 00:18:56.982 1.882 - 1.890: 97.9277% ( 97) 00:18:56.982 1.890 - 1.897: 98.1356% ( 34) 00:18:56.982 1.897 - 1.905: 98.3495% ( 35) 00:18:56.982 1.905 - 1.912: 98.5818% ( 38) 00:18:56.982 1.912 - 1.920: 98.7652% ( 30) 00:18:56.982 1.920 - 1.928: 98.9608% ( 32) 00:18:56.982 1.928 - 1.935: 99.0892% ( 21) 00:18:56.982 1.935 - 1.943: 99.2298% ( 23) 00:18:56.982 1.943 - 1.950: 99.2726% ( 7) 00:18:56.982 1.950 - 1.966: 99.3459% ( 12) 00:18:56.982 1.981 - 1.996: 99.3520% ( 1) 00:18:56.982 2.179 - 2.194: 99.3582% ( 1) 00:18:56.982 2.347 - 2.362: 99.3643% ( 1) 00:18:56.982 3.413 - 3.429: 99.3704% ( 1) 00:18:56.982 4.023 - 4.053: 99.3765% ( 1) 00:18:56.982 4.114 - 4.145: 99.3948% ( 3) 00:18:56.982 4.175 - 4.206: 99.4009% ( 1) 00:18:56.982 4.968 - 4.998: 99.4071% ( 1) 00:18:56.982 5.120 - 5.150: 99.4132% ( 1) 00:18:56.982 5.150 - 5.181: 99.4254% ( 2) 00:18:56.982 5.211 - 5.242: 99.4315% ( 1) 00:18:56.982 5.333 - 5.364: 99.4376% ( 1) 00:18:56.982 5.394 - 5.425: 99.4437% ( 1) 00:18:56.982 5.425 - 5.455: 99.4498% ( 1) 00:18:56.982 5.455 - 5.486: 99.4560% ( 1) 00:18:56.982 5.638 - 5.669: 99.4621% ( 1) 00:18:56.982 5.760 - 5.790: 99.4682% ( 1) 00:18:56.982 6.095 - 6.126: 99.4743% ( 1) 00:18:56.982 6.126 - 6.156: 99.4804% ( 1) 00:18:56.982 7.192 - 7.223: 99.4865% ( 1) 00:18:56.982 8.168 - 8.229: 99.4926% ( 1) 00:18:56.982 12.069 - 12.130: 99.4987% ( 1) 00:18:56.982 3869.745 - 3885.349: 99.5049% ( 1) 00:18:56.982 3978.971 - 3994.575: 99.5171% ( 2) 00:18:56.982 3994.575 - 4025.783: 100.0000% ( 79) 00:18:56.982 00:18:56.982 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:56.982 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:56.982 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:56.982 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:56.982 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:56.982 [ 00:18:56.982 { 00:18:56.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:56.982 "subtype": "Discovery", 00:18:56.982 "listen_addresses": [], 00:18:56.982 "allow_any_host": true, 00:18:56.982 "hosts": [] 00:18:56.982 }, 00:18:56.982 { 00:18:56.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:56.982 "subtype": "NVMe", 00:18:56.982 "listen_addresses": [ 00:18:56.982 { 00:18:56.982 "trtype": "VFIOUSER", 00:18:56.982 "adrfam": "IPv4", 00:18:56.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:56.982 "trsvcid": "0" 00:18:56.982 } 00:18:56.982 ], 00:18:56.982 "allow_any_host": true, 00:18:56.982 "hosts": [], 00:18:56.982 "serial_number": "SPDK1", 00:18:56.982 "model_number": "SPDK bdev Controller", 00:18:56.982 "max_namespaces": 32, 00:18:56.982 "min_cntlid": 1, 00:18:56.982 "max_cntlid": 65519, 00:18:56.982 "namespaces": [ 00:18:56.982 { 00:18:56.982 "nsid": 1, 00:18:56.982 "bdev_name": "Malloc1", 00:18:56.982 "name": "Malloc1", 00:18:56.982 "nguid": "ECD5F4CFE576475187E9649D08914B01", 00:18:56.982 "uuid": "ecd5f4cf-e576-4751-87e9-649d08914b01" 00:18:56.982 } 00:18:56.982 ] 00:18:56.982 }, 00:18:56.982 { 00:18:56.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:56.982 "subtype": "NVMe", 00:18:56.982 "listen_addresses": [ 00:18:56.982 { 00:18:56.982 "trtype": "VFIOUSER", 00:18:56.982 "adrfam": "IPv4", 00:18:56.983 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:56.983 "trsvcid": "0" 00:18:56.983 } 00:18:56.983 ], 00:18:56.983 "allow_any_host": true, 00:18:56.983 "hosts": [], 00:18:56.983 "serial_number": "SPDK2", 00:18:56.983 "model_number": "SPDK bdev Controller", 00:18:56.983 "max_namespaces": 32, 00:18:56.983 "min_cntlid": 1, 00:18:56.983 "max_cntlid": 65519, 00:18:56.983 "namespaces": [ 00:18:56.983 { 00:18:56.983 "nsid": 1, 00:18:56.983 "bdev_name": "Malloc2", 00:18:56.983 "name": "Malloc2", 00:18:56.983 "nguid": "E9CD13C199454694A7537C19150C0C67", 00:18:56.983 "uuid": "e9cd13c1-9945-4694-a753-7c19150c0c67" 00:18:56.983 } 00:18:56.983 ] 00:18:56.983 } 00:18:56.983 ] 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=300764 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:56.983 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:57.241 [2024-12-16 22:24:46.786549] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:57.241 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:57.499 Malloc3 00:18:57.499 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:57.756 [2024-12-16 22:24:47.222956] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:57.756 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:57.756 Asynchronous Event Request test 00:18:57.756 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.756 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:57.756 Registering asynchronous event callbacks... 00:18:57.756 Starting namespace attribute notice tests for all controllers... 00:18:57.756 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:57.756 aer_cb - Changed Namespace 00:18:57.756 Cleaning up... 
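The waitforfile helper traced above (the autotest_common.sh i=0 / '[ ! -e /tmp/aer_touch_file ]' / '[ $i -lt 200 ]' / sleep 0.1 sequence) is just a bounded poll for the touch file that the aer example creates once its namespace-change callback has fired. Reconstructed as a sketch from the xtrace output; the actual helper may differ in detail:

    # Poll up to 200 times, 100 ms apart (~20 s total), for the file to appear.
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # give up once the retry budget is spent
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file

Once the file appears, the test removes it and re-queries the subsystems; the nvmf_get_subsystems output below shows Malloc3 now attached to cnode1 as nsid 2.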
00:18:57.756 [ 00:18:57.756 { 00:18:57.756 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:57.756 "subtype": "Discovery", 00:18:57.756 "listen_addresses": [], 00:18:57.756 "allow_any_host": true, 00:18:57.756 "hosts": [] 00:18:57.756 }, 00:18:57.756 { 00:18:57.756 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:57.756 "subtype": "NVMe", 00:18:57.756 "listen_addresses": [ 00:18:57.756 { 00:18:57.756 "trtype": "VFIOUSER", 00:18:57.756 "adrfam": "IPv4", 00:18:57.756 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:57.756 "trsvcid": "0" 00:18:57.756 } 00:18:57.756 ], 00:18:57.756 "allow_any_host": true, 00:18:57.756 "hosts": [], 00:18:57.756 "serial_number": "SPDK1", 00:18:57.756 "model_number": "SPDK bdev Controller", 00:18:57.756 "max_namespaces": 32, 00:18:57.756 "min_cntlid": 1, 00:18:57.756 "max_cntlid": 65519, 00:18:57.756 "namespaces": [ 00:18:57.756 { 00:18:57.757 "nsid": 1, 00:18:57.757 "bdev_name": "Malloc1", 00:18:57.757 "name": "Malloc1", 00:18:57.757 "nguid": "ECD5F4CFE576475187E9649D08914B01", 00:18:57.757 "uuid": "ecd5f4cf-e576-4751-87e9-649d08914b01" 00:18:57.757 }, 00:18:57.757 { 00:18:57.757 "nsid": 2, 00:18:57.757 "bdev_name": "Malloc3", 00:18:57.757 "name": "Malloc3", 00:18:57.757 "nguid": "851FDD0C9E8449528AC91CE600459161", 00:18:57.757 "uuid": "851fdd0c-9e84-4952-8ac9-1ce600459161" 00:18:57.757 } 00:18:57.757 ] 00:18:57.757 }, 00:18:57.757 { 00:18:57.757 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:57.757 "subtype": "NVMe", 00:18:57.757 "listen_addresses": [ 00:18:57.757 { 00:18:57.757 "trtype": "VFIOUSER", 00:18:57.757 "adrfam": "IPv4", 00:18:57.757 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:57.757 "trsvcid": "0" 00:18:57.757 } 00:18:57.757 ], 00:18:57.757 "allow_any_host": true, 00:18:57.757 "hosts": [], 00:18:57.757 "serial_number": "SPDK2", 00:18:57.757 "model_number": "SPDK bdev Controller", 00:18:57.757 "max_namespaces": 32, 00:18:57.757 "min_cntlid": 1, 00:18:57.757 "max_cntlid": 65519, 00:18:57.757 "namespaces": [ 00:18:57.757 { 00:18:57.757 "nsid": 1, 00:18:57.757 "bdev_name": "Malloc2", 00:18:57.757 "name": "Malloc2", 00:18:57.757 "nguid": "E9CD13C199454694A7537C19150C0C67", 00:18:57.757 "uuid": "e9cd13c1-9945-4694-a753-7c19150c0c67" 00:18:57.757 } 00:18:57.757 ] 00:18:57.757 } 00:18:57.757 ] 00:18:57.757 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 300764 00:18:57.757 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:57.757 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:57.757 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:57.757 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:58.016 [2024-12-16 22:24:47.477548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:58.016 [2024-12-16 22:24:47.477581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300955 ] 00:18:58.016 [2024-12-16 22:24:47.513985] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:58.016 [2024-12-16 22:24:47.526463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:58.016 [2024-12-16 22:24:47.526484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f416a1ed000 00:18:58.016 [2024-12-16 22:24:47.527465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.528469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.529483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.530494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.531500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.532511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.533517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:58.016 [2024-12-16 22:24:47.534524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:58.017 [2024-12-16 22:24:47.535535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:58.017 [2024-12-16 22:24:47.535545] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4168ef6000 00:18:58.017 [2024-12-16 22:24:47.536451] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:58.017 [2024-12-16 22:24:47.545754] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:58.017 [2024-12-16 22:24:47.545788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:58.017 [2024-12-16 22:24:47.550877] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:58.017 [2024-12-16 22:24:47.550911] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:58.017 [2024-12-16 22:24:47.550982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:58.017 
[2024-12-16 22:24:47.550995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:58.017 [2024-12-16 22:24:47.551000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:58.017 [2024-12-16 22:24:47.551885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:58.017 [2024-12-16 22:24:47.551894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:58.017 [2024-12-16 22:24:47.551900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:58.017 [2024-12-16 22:24:47.552892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:58.017 [2024-12-16 22:24:47.552900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:58.017 [2024-12-16 22:24:47.552907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.553902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:58.017 [2024-12-16 22:24:47.553911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.554904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:58.017 [2024-12-16 22:24:47.554912] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:58.017 [2024-12-16 22:24:47.554916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.554922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.555030] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:58.017 [2024-12-16 22:24:47.555034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.555039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:58.017 [2024-12-16 22:24:47.555910] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:58.017 [2024-12-16 22:24:47.556911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:58.017 [2024-12-16 22:24:47.557923] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:58.017 [2024-12-16 22:24:47.558919] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.017 [2024-12-16 22:24:47.558957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:58.017 [2024-12-16 22:24:47.559934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:58.017 [2024-12-16 22:24:47.559942] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:58.017 [2024-12-16 22:24:47.559946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.559963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:58.017 [2024-12-16 22:24:47.559974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.559984] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.017 [2024-12-16 22:24:47.559989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.017 [2024-12-16 22:24:47.559992] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.017 [2024-12-16 22:24:47.560004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.566204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:58.017 [2024-12-16 22:24:47.566215] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:58.017 [2024-12-16 22:24:47.566220] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:58.017 [2024-12-16 22:24:47.566223] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:58.017 [2024-12-16 22:24:47.566228] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:58.017 [2024-12-16 22:24:47.566232] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:58.017 [2024-12-16 22:24:47.566236] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:58.017 [2024-12-16 22:24:47.566240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.566249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:58.017 [2024-12-16 
22:24:47.566260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.574196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:58.017 [2024-12-16 22:24:47.574208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.017 [2024-12-16 22:24:47.574216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.017 [2024-12-16 22:24:47.574226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.017 [2024-12-16 22:24:47.574233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.017 [2024-12-16 22:24:47.574237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.574247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.574255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.582195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:58.017 [2024-12-16 22:24:47.582202] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:58.017 [2024-12-16 22:24:47.582207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.582214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.582219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.582227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.590195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:58.017 [2024-12-16 22:24:47.590246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.590255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.590262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:58.017 [2024-12-16 22:24:47.590266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:58.017 [2024-12-16 22:24:47.590269] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.017 [2024-12-16 22:24:47.590274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.598197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:58.017 [2024-12-16 22:24:47.598206] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:58.017 [2024-12-16 22:24:47.598214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.598220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:58.017 [2024-12-16 22:24:47.598227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.017 [2024-12-16 22:24:47.598230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.017 [2024-12-16 22:24:47.598233] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.017 [2024-12-16 22:24:47.598241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.017 [2024-12-16 22:24:47.606195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.606208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.606215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.606221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:58.018 [2024-12-16 22:24:47.606225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.018 [2024-12-16 22:24:47.606228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.018 [2024-12-16 22:24:47.606233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.614195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.614204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614236] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:58.018 [2024-12-16 22:24:47.614240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:58.018 [2024-12-16 22:24:47.614245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:58.018 [2024-12-16 22:24:47.614261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.622195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.622207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.630196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.630208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.638196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.638208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.645204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.645218] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:58.018 [2024-12-16 22:24:47.645223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:58.018 [2024-12-16 22:24:47.645226] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:58.018 [2024-12-16 22:24:47.645229] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:58.018 [2024-12-16 22:24:47.645232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:58.018 [2024-12-16 22:24:47.645238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:58.018 [2024-12-16 22:24:47.645244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:58.018 
[2024-12-16 22:24:47.645248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:58.018 [2024-12-16 22:24:47.645251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.018 [2024-12-16 22:24:47.645256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.645262] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:58.018 [2024-12-16 22:24:47.645266] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:58.018 [2024-12-16 22:24:47.645269] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.018 [2024-12-16 22:24:47.645274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.645280] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:58.018 [2024-12-16 22:24:47.645284] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:58.018 [2024-12-16 22:24:47.645286] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:58.018 [2024-12-16 22:24:47.645291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:58.018 [2024-12-16 22:24:47.653196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.653209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.653218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:58.018 [2024-12-16 22:24:47.653224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:58.018 ===================================================== 00:18:58.018 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:58.018 ===================================================== 00:18:58.018 Controller Capabilities/Features 00:18:58.018 ================================ 00:18:58.018 Vendor ID: 4e58 00:18:58.018 Subsystem Vendor ID: 4e58 00:18:58.018 Serial Number: SPDK2 00:18:58.018 Model Number: SPDK bdev Controller 00:18:58.018 Firmware Version: 25.01 00:18:58.018 Recommended Arb Burst: 6 00:18:58.018 IEEE OUI Identifier: 8d 6b 50 00:18:58.018 Multi-path I/O 00:18:58.018 May have multiple subsystem ports: Yes 00:18:58.018 May have multiple controllers: Yes 00:18:58.018 Associated with SR-IOV VF: No 00:18:58.018 Max Data Transfer Size: 131072 00:18:58.018 Max Number of Namespaces: 32 00:18:58.018 Max Number of I/O Queues: 127 00:18:58.018 NVMe Specification Version (VS): 1.3 00:18:58.018 NVMe Specification Version (Identify): 1.3 00:18:58.018 Maximum Queue Entries: 256 00:18:58.018 Contiguous Queues Required: Yes 00:18:58.018 Arbitration Mechanisms Supported 00:18:58.018 Weighted Round Robin: Not Supported 00:18:58.018 Vendor Specific: Not 
Supported 00:18:58.018 Reset Timeout: 15000 ms 00:18:58.018 Doorbell Stride: 4 bytes 00:18:58.018 NVM Subsystem Reset: Not Supported 00:18:58.018 Command Sets Supported 00:18:58.018 NVM Command Set: Supported 00:18:58.018 Boot Partition: Not Supported 00:18:58.018 Memory Page Size Minimum: 4096 bytes 00:18:58.018 Memory Page Size Maximum: 4096 bytes 00:18:58.018 Persistent Memory Region: Not Supported 00:18:58.018 Optional Asynchronous Events Supported 00:18:58.018 Namespace Attribute Notices: Supported 00:18:58.018 Firmware Activation Notices: Not Supported 00:18:58.018 ANA Change Notices: Not Supported 00:18:58.018 PLE Aggregate Log Change Notices: Not Supported 00:18:58.018 LBA Status Info Alert Notices: Not Supported 00:18:58.018 EGE Aggregate Log Change Notices: Not Supported 00:18:58.018 Normal NVM Subsystem Shutdown event: Not Supported 00:18:58.018 Zone Descriptor Change Notices: Not Supported 00:18:58.018 Discovery Log Change Notices: Not Supported 00:18:58.018 Controller Attributes 00:18:58.018 128-bit Host Identifier: Supported 00:18:58.018 Non-Operational Permissive Mode: Not Supported 00:18:58.018 NVM Sets: Not Supported 00:18:58.018 Read Recovery Levels: Not Supported 00:18:58.018 Endurance Groups: Not Supported 00:18:58.018 Predictable Latency Mode: Not Supported 00:18:58.018 Traffic Based Keep ALive: Not Supported 00:18:58.018 Namespace Granularity: Not Supported 00:18:58.018 SQ Associations: Not Supported 00:18:58.018 UUID List: Not Supported 00:18:58.018 Multi-Domain Subsystem: Not Supported 00:18:58.018 Fixed Capacity Management: Not Supported 00:18:58.018 Variable Capacity Management: Not Supported 00:18:58.018 Delete Endurance Group: Not Supported 00:18:58.018 Delete NVM Set: Not Supported 00:18:58.018 Extended LBA Formats Supported: Not Supported 00:18:58.018 Flexible Data Placement Supported: Not Supported 00:18:58.018 00:18:58.018 Controller Memory Buffer Support 00:18:58.018 ================================ 00:18:58.018 Supported: No 00:18:58.018 00:18:58.018 Persistent Memory Region Support 00:18:58.018 ================================ 00:18:58.018 Supported: No 00:18:58.018 00:18:58.018 Admin Command Set Attributes 00:18:58.018 ============================ 00:18:58.018 Security Send/Receive: Not Supported 00:18:58.018 Format NVM: Not Supported 00:18:58.018 Firmware Activate/Download: Not Supported 00:18:58.018 Namespace Management: Not Supported 00:18:58.018 Device Self-Test: Not Supported 00:18:58.018 Directives: Not Supported 00:18:58.018 NVMe-MI: Not Supported 00:18:58.018 Virtualization Management: Not Supported 00:18:58.018 Doorbell Buffer Config: Not Supported 00:18:58.018 Get LBA Status Capability: Not Supported 00:18:58.018 Command & Feature Lockdown Capability: Not Supported 00:18:58.018 Abort Command Limit: 4 00:18:58.018 Async Event Request Limit: 4 00:18:58.018 Number of Firmware Slots: N/A 00:18:58.018 Firmware Slot 1 Read-Only: N/A 00:18:58.018 Firmware Activation Without Reset: N/A 00:18:58.018 Multiple Update Detection Support: N/A 00:18:58.019 Firmware Update Granularity: No Information Provided 00:18:58.019 Per-Namespace SMART Log: No 00:18:58.019 Asymmetric Namespace Access Log Page: Not Supported 00:18:58.019 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:58.019 Command Effects Log Page: Supported 00:18:58.019 Get Log Page Extended Data: Supported 00:18:58.019 Telemetry Log Pages: Not Supported 00:18:58.019 Persistent Event Log Pages: Not Supported 00:18:58.019 Supported Log Pages Log Page: May Support 00:18:58.019 Commands Supported & 
Effects Log Page: Not Supported 00:18:58.019 Feature Identifiers & Effects Log Page:May Support 00:18:58.019 NVMe-MI Commands & Effects Log Page: May Support 00:18:58.019 Data Area 4 for Telemetry Log: Not Supported 00:18:58.019 Error Log Page Entries Supported: 128 00:18:58.019 Keep Alive: Supported 00:18:58.019 Keep Alive Granularity: 10000 ms 00:18:58.019 00:18:58.019 NVM Command Set Attributes 00:18:58.019 ========================== 00:18:58.019 Submission Queue Entry Size 00:18:58.019 Max: 64 00:18:58.019 Min: 64 00:18:58.019 Completion Queue Entry Size 00:18:58.019 Max: 16 00:18:58.019 Min: 16 00:18:58.019 Number of Namespaces: 32 00:18:58.019 Compare Command: Supported 00:18:58.019 Write Uncorrectable Command: Not Supported 00:18:58.019 Dataset Management Command: Supported 00:18:58.019 Write Zeroes Command: Supported 00:18:58.019 Set Features Save Field: Not Supported 00:18:58.019 Reservations: Not Supported 00:18:58.019 Timestamp: Not Supported 00:18:58.019 Copy: Supported 00:18:58.019 Volatile Write Cache: Present 00:18:58.019 Atomic Write Unit (Normal): 1 00:18:58.019 Atomic Write Unit (PFail): 1 00:18:58.019 Atomic Compare & Write Unit: 1 00:18:58.019 Fused Compare & Write: Supported 00:18:58.019 Scatter-Gather List 00:18:58.019 SGL Command Set: Supported (Dword aligned) 00:18:58.019 SGL Keyed: Not Supported 00:18:58.019 SGL Bit Bucket Descriptor: Not Supported 00:18:58.019 SGL Metadata Pointer: Not Supported 00:18:58.019 Oversized SGL: Not Supported 00:18:58.019 SGL Metadata Address: Not Supported 00:18:58.019 SGL Offset: Not Supported 00:18:58.019 Transport SGL Data Block: Not Supported 00:18:58.019 Replay Protected Memory Block: Not Supported 00:18:58.019 00:18:58.019 Firmware Slot Information 00:18:58.019 ========================= 00:18:58.019 Active slot: 1 00:18:58.019 Slot 1 Firmware Revision: 25.01 00:18:58.019 00:18:58.019 00:18:58.019 Commands Supported and Effects 00:18:58.019 ============================== 00:18:58.019 Admin Commands 00:18:58.019 -------------- 00:18:58.019 Get Log Page (02h): Supported 00:18:58.019 Identify (06h): Supported 00:18:58.019 Abort (08h): Supported 00:18:58.019 Set Features (09h): Supported 00:18:58.019 Get Features (0Ah): Supported 00:18:58.019 Asynchronous Event Request (0Ch): Supported 00:18:58.019 Keep Alive (18h): Supported 00:18:58.019 I/O Commands 00:18:58.019 ------------ 00:18:58.019 Flush (00h): Supported LBA-Change 00:18:58.019 Write (01h): Supported LBA-Change 00:18:58.019 Read (02h): Supported 00:18:58.019 Compare (05h): Supported 00:18:58.019 Write Zeroes (08h): Supported LBA-Change 00:18:58.019 Dataset Management (09h): Supported LBA-Change 00:18:58.019 Copy (19h): Supported LBA-Change 00:18:58.019 00:18:58.019 Error Log 00:18:58.019 ========= 00:18:58.019 00:18:58.019 Arbitration 00:18:58.019 =========== 00:18:58.019 Arbitration Burst: 1 00:18:58.019 00:18:58.019 Power Management 00:18:58.019 ================ 00:18:58.019 Number of Power States: 1 00:18:58.019 Current Power State: Power State #0 00:18:58.019 Power State #0: 00:18:58.019 Max Power: 0.00 W 00:18:58.019 Non-Operational State: Operational 00:18:58.019 Entry Latency: Not Reported 00:18:58.019 Exit Latency: Not Reported 00:18:58.019 Relative Read Throughput: 0 00:18:58.019 Relative Read Latency: 0 00:18:58.019 Relative Write Throughput: 0 00:18:58.019 Relative Write Latency: 0 00:18:58.019 Idle Power: Not Reported 00:18:58.019 Active Power: Not Reported 00:18:58.019 Non-Operational Permissive Mode: Not Supported 00:18:58.019 00:18:58.019 Health Information 
00:18:58.019 ================== 00:18:58.019 Critical Warnings: 00:18:58.019 Available Spare Space: OK 00:18:58.019 Temperature: OK 00:18:58.019 Device Reliability: OK 00:18:58.019 Read Only: No 00:18:58.019 Volatile Memory Backup: OK 00:18:58.019 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:58.019 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:58.019 Available Spare: 0% 00:18:58.019 [2024-12-16 22:24:47.653306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:58.019 [2024-12-16 22:24:47.661198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:58.019 [2024-12-16 22:24:47.661228] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:58.019 [2024-12-16 22:24:47.661237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.019 [2024-12-16 22:24:47.661243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.019 [2024-12-16 22:24:47.661248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.019 [2024-12-16 22:24:47.661255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.019 [2024-12-16 22:24:47.661306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:58.019 [2024-12-16 22:24:47.661316] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:58.019 [2024-12-16 22:24:47.662306] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.019 [2024-12-16 22:24:47.662347] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:58.019 [2024-12-16 22:24:47.662354] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:58.019 [2024-12-16 22:24:47.663314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:58.019 [2024-12-16 22:24:47.663325] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:58.019 [2024-12-16 22:24:47.663376] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:58.019 [2024-12-16 22:24:47.666199] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:58.019 Available Spare Threshold: 0% 00:18:58.019 Life Percentage Used: 0% 00:18:58.019 Data Units Read: 0 00:18:58.019 Data Units Written: 0 00:18:58.019 Host Read Commands: 0 00:18:58.019 Host Write Commands: 0 00:18:58.019 Controller Busy Time: 0 minutes 00:18:58.019 Power Cycles: 0 00:18:58.019 Power On Hours: 0 hours 00:18:58.019 Unsafe Shutdowns: 0 00:18:58.019 Unrecoverable Media Errors: 0 00:18:58.019 Lifetime Error Log Entries: 0 00:18:58.019 Warning Temperature 
Time: 0 minutes 00:18:58.019 Critical Temperature Time: 0 minutes 00:18:58.019 00:18:58.019 Number of Queues 00:18:58.019 ================ 00:18:58.019 Number of I/O Submission Queues: 127 00:18:58.019 Number of I/O Completion Queues: 127 00:18:58.019 00:18:58.019 Active Namespaces 00:18:58.019 ================= 00:18:58.019 Namespace ID:1 00:18:58.019 Error Recovery Timeout: Unlimited 00:18:58.019 Command Set Identifier: NVM (00h) 00:18:58.019 Deallocate: Supported 00:18:58.019 Deallocated/Unwritten Error: Not Supported 00:18:58.019 Deallocated Read Value: Unknown 00:18:58.019 Deallocate in Write Zeroes: Not Supported 00:18:58.019 Deallocated Guard Field: 0xFFFF 00:18:58.019 Flush: Supported 00:18:58.019 Reservation: Supported 00:18:58.019 Namespace Sharing Capabilities: Multiple Controllers 00:18:58.019 Size (in LBAs): 131072 (0GiB) 00:18:58.019 Capacity (in LBAs): 131072 (0GiB) 00:18:58.019 Utilization (in LBAs): 131072 (0GiB) 00:18:58.019 NGUID: E9CD13C199454694A7537C19150C0C67 00:18:58.019 UUID: e9cd13c1-9945-4694-a753-7c19150c0c67 00:18:58.019 Thin Provisioning: Not Supported 00:18:58.019 Per-NS Atomic Units: Yes 00:18:58.019 Atomic Boundary Size (Normal): 0 00:18:58.019 Atomic Boundary Size (PFail): 0 00:18:58.019 Atomic Boundary Offset: 0 00:18:58.019 Maximum Single Source Range Length: 65535 00:18:58.019 Maximum Copy Length: 65535 00:18:58.019 Maximum Source Range Count: 1 00:18:58.019 NGUID/EUI64 Never Reused: No 00:18:58.019 Namespace Write Protected: No 00:18:58.019 Number of LBA Formats: 1 00:18:58.019 Current LBA Format: LBA Format #00 00:18:58.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:58.019 00:18:58.019 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:58.277 [2024-12-16 22:24:47.897373] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.539 Initializing NVMe Controllers 00:19:03.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:03.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:03.539 Initialization complete. Launching workers. 
00:19:03.539 ======================================================== 00:19:03.539 Latency(us) 00:19:03.539 Device Information : IOPS MiB/s Average min max 00:19:03.539 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.40 156.01 3205.15 982.40 9601.39 00:19:03.539 ======================================================== 00:19:03.539 Total : 39937.40 156.01 3205.15 982.40 9601.39 00:19:03.539 00:19:03.539 [2024-12-16 22:24:52.995456] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.539 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:03.539 [2024-12-16 22:24:53.225115] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.803 Initializing NVMe Controllers 00:19:08.803 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:08.803 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:08.803 Initialization complete. Launching workers. 00:19:08.803 ======================================================== 00:19:08.803 Latency(us) 00:19:08.803 Device Information : IOPS MiB/s Average min max 00:19:08.803 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39924.18 155.95 3205.68 974.72 7606.77 00:19:08.804 ======================================================== 00:19:08.804 Total : 39924.18 155.95 3205.68 974.72 7606.77 00:19:08.804 00:19:08.804 [2024-12-16 22:24:58.243559] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:08.804 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:08.804 [2024-12-16 22:24:58.446483] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:14.067 [2024-12-16 22:25:03.585290] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:14.067 Initializing NVMe Controllers 00:19:14.067 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:14.067 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:14.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:14.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:14.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:14.068 Initialization complete. Launching workers. 
00:19:14.068 Starting thread on core 2 00:19:14.068 Starting thread on core 3 00:19:14.068 Starting thread on core 1 00:19:14.068 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:14.326 [2024-12-16 22:25:03.878569] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.609 [2024-12-16 22:25:06.936444] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.609 Initializing NVMe Controllers 00:19:17.609 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.609 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.609 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:17.609 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:17.609 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:17.609 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:17.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:17.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:17.609 Initialization complete. Launching workers. 00:19:17.609 Starting thread on core 1 with urgent priority queue 00:19:17.609 Starting thread on core 2 with urgent priority queue 00:19:17.609 Starting thread on core 3 with urgent priority queue 00:19:17.609 Starting thread on core 0 with urgent priority queue 00:19:17.609 SPDK bdev Controller (SPDK2 ) core 0: 5903.67 IO/s 16.94 secs/100000 ios 00:19:17.609 SPDK bdev Controller (SPDK2 ) core 1: 5178.33 IO/s 19.31 secs/100000 ios 00:19:17.609 SPDK bdev Controller (SPDK2 ) core 2: 6398.67 IO/s 15.63 secs/100000 ios 00:19:17.609 SPDK bdev Controller (SPDK2 ) core 3: 5911.67 IO/s 16.92 secs/100000 ios 00:19:17.609 ======================================================== 00:19:17.609 00:19:17.609 22:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:17.609 [2024-12-16 22:25:07.209624] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.609 Initializing NVMe Controllers 00:19:17.609 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.609 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.609 Namespace ID: 1 size: 0GB 00:19:17.609 Initialization complete. 00:19:17.609 INFO: using host memory buffer for IO 00:19:17.609 Hello world! 
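The example runs above (perf read, perf write, reconnect, arbitration, hello_world) all target the same vfio-user controller. As a quick consistency check on the perf tables, the MiB/s column follows directly from the IOPS column at the 4 KiB I/O size: 39937.40 IOPS x 4096 B/IO = 39937.40/256 MiB/s, roughly 156.01 MiB/s, matching the reported read throughput (and likewise 39924.18/256 gives 155.95 for the write run). The perf invocations can also be reproduced outside the harness; a minimal sketch, assuming SPDK is built under the workspace path used throughout this log and that a vfio-user controller is already listening at the socket directory below:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 4096-byte reads for 5 seconds at queue depth 128 on core mask 0x2 (lcore 1);
    # the -s 256 and -g flags are copied verbatim from the run above.
    "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # Swapping -w read for -w write reproduces the second latency table.
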
00:19:17.609 [2024-12-16 22:25:07.222715] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.609 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:17.867 [2024-12-16 22:25:07.494529] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:19.240 Initializing NVMe Controllers 00:19:19.240 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.240 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:19.240 Initialization complete. Launching workers. 00:19:19.240 submit (in ns) avg, min, max = 6537.8, 3130.5, 3999761.0 00:19:19.240 complete (in ns) avg, min, max = 19336.9, 1718.1, 4170631.4 00:19:19.240 00:19:19.240 Submit histogram 00:19:19.240 ================ 00:19:19.240 Range in us Cumulative Count 00:19:19.240 3.124 - 3.139: 0.0122% ( 2) 00:19:19.240 3.139 - 3.154: 0.0670% ( 9) 00:19:19.240 3.154 - 3.170: 0.1278% ( 10) 00:19:19.240 3.170 - 3.185: 0.1643% ( 6) 00:19:19.240 3.185 - 3.200: 0.3713% ( 34) 00:19:19.240 3.200 - 3.215: 1.5703% ( 197) 00:19:19.240 3.215 - 3.230: 5.5204% ( 649) 00:19:19.240 3.230 - 3.246: 10.7730% ( 863) 00:19:19.240 3.246 - 3.261: 16.3299% ( 913) 00:19:19.240 3.261 - 3.276: 23.5240% ( 1182) 00:19:19.240 3.276 - 3.291: 31.3025% ( 1278) 00:19:19.240 3.291 - 3.307: 37.2550% ( 978) 00:19:19.240 3.307 - 3.322: 42.3250% ( 833) 00:19:19.240 3.322 - 3.337: 47.1089% ( 786) 00:19:19.240 3.337 - 3.352: 51.6616% ( 748) 00:19:19.240 3.352 - 3.368: 55.6604% ( 657) 00:19:19.240 3.368 - 3.383: 61.8138% ( 1011) 00:19:19.240 3.383 - 3.398: 67.6506% ( 959) 00:19:19.240 3.398 - 3.413: 72.8606% ( 856) 00:19:19.240 3.413 - 3.429: 78.6062% ( 944) 00:19:19.240 3.429 - 3.444: 82.4650% ( 634) 00:19:19.240 3.444 - 3.459: 84.9970% ( 416) 00:19:19.240 3.459 - 3.474: 86.3542% ( 223) 00:19:19.240 3.474 - 3.490: 87.2976% ( 155) 00:19:19.240 3.490 - 3.505: 87.8637% ( 93) 00:19:19.240 3.505 - 3.520: 88.4662% ( 99) 00:19:19.240 3.520 - 3.535: 89.1783% ( 117) 00:19:19.240 3.535 - 3.550: 90.0243% ( 139) 00:19:19.240 3.550 - 3.566: 90.9617% ( 154) 00:19:19.240 3.566 - 3.581: 91.8320% ( 143) 00:19:19.240 3.581 - 3.596: 92.6598% ( 136) 00:19:19.240 3.596 - 3.611: 93.5058% ( 139) 00:19:19.240 3.611 - 3.627: 94.3214% ( 134) 00:19:19.240 3.627 - 3.642: 95.3074% ( 162) 00:19:19.240 3.642 - 3.657: 96.1716% ( 142) 00:19:19.240 3.657 - 3.672: 96.9385% ( 126) 00:19:19.240 3.672 - 3.688: 97.7419% ( 132) 00:19:19.240 3.688 - 3.703: 98.1802% ( 72) 00:19:19.240 3.703 - 3.718: 98.6062% ( 70) 00:19:19.240 3.718 - 3.733: 98.8375% ( 38) 00:19:19.240 3.733 - 3.749: 99.0505% ( 35) 00:19:19.240 3.749 - 3.764: 99.2575% ( 34) 00:19:19.240 3.764 - 3.779: 99.3914% ( 22) 00:19:19.241 3.779 - 3.794: 99.4766% ( 14) 00:19:19.241 3.794 - 3.810: 99.5253% ( 8) 00:19:19.241 3.810 - 3.825: 99.5496% ( 4) 00:19:19.241 3.825 - 3.840: 99.5800% ( 5) 00:19:19.241 3.840 - 3.855: 99.6044% ( 4) 00:19:19.241 3.870 - 3.886: 99.6105% ( 1) 00:19:19.241 3.901 - 3.931: 99.6166% ( 1) 00:19:19.241 4.907 - 4.937: 99.6226% ( 1) 00:19:19.241 4.968 - 4.998: 99.6287% ( 1) 00:19:19.241 5.029 - 5.059: 99.6348% ( 1) 00:19:19.241 5.303 - 5.333: 99.6409% ( 1) 00:19:19.241 5.333 - 5.364: 99.6470% ( 1) 00:19:19.241 5.394 - 5.425: 99.6531% ( 1) 00:19:19.241 5.425 - 5.455: 99.6652% ( 2) 00:19:19.241 
5.455 - 5.486: 99.6835% ( 3) 00:19:19.241 5.547 - 5.577: 99.6896% ( 1) 00:19:19.241 5.577 - 5.608: 99.7018% ( 2) 00:19:19.241 5.638 - 5.669: 99.7079% ( 1) 00:19:19.241 5.851 - 5.882: 99.7200% ( 2) 00:19:19.241 6.004 - 6.034: 99.7322% ( 2) 00:19:19.241 6.065 - 6.095: 99.7383% ( 1) 00:19:19.241 6.187 - 6.217: 99.7505% ( 2) 00:19:19.241 6.278 - 6.309: 99.7565% ( 1) 00:19:19.241 6.370 - 6.400: 99.7626% ( 1) 00:19:19.241 6.400 - 6.430: 99.7748% ( 2) 00:19:19.241 6.430 - 6.461: 99.7870% ( 2) 00:19:19.241 6.613 - 6.644: 99.7931% ( 1) 00:19:19.241 6.644 - 6.674: 99.8052% ( 2) 00:19:19.241 6.766 - 6.796: 99.8113% ( 1) 00:19:19.241 7.070 - 7.101: 99.8174% ( 1) 00:19:19.241 7.101 - 7.131: 99.8235% ( 1) 00:19:19.241 7.192 - 7.223: 99.8296% ( 1) 00:19:19.241 7.253 - 7.284: 99.8357% ( 1) 00:19:19.241 7.467 - 7.497: 99.8418% ( 1) 00:19:19.241 7.558 - 7.589: 99.8478% ( 1) 00:19:19.241 7.650 - 7.680: 99.8539% ( 1) 00:19:19.241 [2024-12-16 22:25:08.592204] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:19.241 7.741 - 7.771: 99.8600% ( 1) 00:19:19.241 7.802 - 7.863: 99.8661% ( 1) 00:19:19.241 7.985 - 8.046: 99.8722% ( 1) 00:19:19.241 8.168 - 8.229: 99.8783% ( 1) 00:19:19.241 8.350 - 8.411: 99.8844% ( 1) 00:19:19.241 8.533 - 8.594: 99.8904% ( 1) 00:19:19.241 8.777 - 8.838: 99.8965% ( 1) 00:19:19.241 9.265 - 9.326: 99.9026% ( 1) 00:19:19.241 9.813 - 9.874: 99.9087% ( 1) 00:19:19.241 13.592 - 13.653: 99.9148% ( 1) 00:19:19.241 15.604 - 15.726: 99.9209% ( 1) 00:19:19.241 3994.575 - 4025.783: 100.0000% ( 13) 00:19:19.241 00:19:19.241 Complete histogram 00:19:19.241 ================== 00:19:19.241 Range in us Cumulative Count 00:19:19.241 1.714 - 1.722: 0.0122% ( 2) 00:19:19.241 1.722 - 1.730: 0.1096% ( 16) 00:19:19.241 1.730 - 1.737: 0.3226% ( 35) 00:19:19.241 1.737 - 1.745: 0.4626% ( 23) 00:19:19.241 1.745 - 1.752: 0.4991% ( 6) 00:19:19.241 1.752 - 1.760: 0.5173% ( 3) 00:19:19.241 1.760 - 1.768: 0.7243% ( 34) 00:19:19.241 1.768 - 1.775: 3.5971% ( 472) 00:19:19.241 1.775 - 1.783: 16.1047% ( 2055) 00:19:19.241 1.783 - 1.790: 34.3761% ( 3002) 00:19:19.241 1.790 - 1.798: 46.4881% ( 1990) 00:19:19.241 1.798 - 1.806: 50.7060% ( 693) 00:19:19.241 1.806 - 1.813: 53.2197% ( 413) 00:19:19.241 1.813 - 1.821: 54.5953% ( 226) 00:19:19.241 1.821 - 1.829: 55.9586% ( 224) 00:19:19.241 1.829 - 1.836: 61.1077% ( 846) 00:19:19.241 1.836 - 1.844: 73.5971% ( 2052) 00:19:19.241 1.844 - 1.851: 86.5003% ( 2120) 00:19:19.241 1.851 - 1.859: 92.8484% ( 1043) 00:19:19.241 1.859 - 1.867: 95.2526% ( 395) 00:19:19.241 1.867 - 1.874: 96.6890% ( 236) 00:19:19.241 1.874 - 1.882: 97.6080% ( 151) 00:19:19.241 1.882 - 1.890: 97.9489% ( 56) 00:19:19.241 1.890 - 1.897: 98.1741% ( 37) 00:19:19.241 1.897 - 1.905: 98.3628% ( 31) 00:19:19.241 1.905 - 1.912: 98.6610% ( 49) 00:19:19.241 1.912 - 1.920: 98.8923% ( 38) 00:19:19.241 1.920 - 1.928: 99.0323% ( 23) 00:19:19.241 1.928 - 1.935: 99.1479% ( 19) 00:19:19.241 1.935 - 1.943: 99.2209% ( 12) 00:19:19.241 1.943 - 1.950: 99.2757% ( 9) 00:19:19.241 1.950 - 1.966: 99.3366% ( 10) 00:19:19.241 1.966 - 1.981: 99.3548% ( 3) 00:19:19.241 1.981 - 1.996: 99.3792% ( 4) 00:19:19.241 2.042 - 2.057: 99.3853% ( 1) 00:19:19.241 2.057 - 2.072: 99.3914% ( 1) 00:19:19.241 2.133 - 2.149: 99.3974% ( 1) 00:19:19.241 2.179 - 2.194: 99.4035% ( 1) 00:19:19.241 2.286 - 2.301: 99.4096% ( 1) 00:19:19.241 2.499 - 2.514: 99.4157% ( 1) 00:19:19.241 3.733 - 3.749: 99.4218% ( 1) 00:19:19.241 3.992 - 4.023: 99.4279% ( 1) 00:19:19.241 4.236 - 4.267: 99.4340% ( 1) 00:19:19.241 
4.267 - 4.297: 99.4400% ( 1) 00:19:19.241 4.389 - 4.419: 99.4461% ( 1) 00:19:19.241 4.480 - 4.510: 99.4522% ( 1) 00:19:19.241 4.602 - 4.632: 99.4583% ( 1) 00:19:19.241 4.846 - 4.876: 99.4644% ( 1) 00:19:19.241 4.937 - 4.968: 99.4705% ( 1) 00:19:19.241 5.059 - 5.090: 99.4766% ( 1) 00:19:19.241 5.150 - 5.181: 99.4827% ( 1) 00:19:19.241 5.242 - 5.272: 99.4887% ( 1) 00:19:19.241 5.303 - 5.333: 99.4948% ( 1) 00:19:19.241 5.577 - 5.608: 99.5009% ( 1) 00:19:19.241 5.699 - 5.730: 99.5131% ( 2) 00:19:19.241 5.790 - 5.821: 99.5192% ( 1) 00:19:19.241 5.821 - 5.851: 99.5253% ( 1) 00:19:19.241 6.034 - 6.065: 99.5313% ( 1) 00:19:19.241 6.888 - 6.918: 99.5374% ( 1) 00:19:19.241 7.589 - 7.619: 99.5435% ( 1) 00:19:19.241 12.130 - 12.190: 99.5496% ( 1) 00:19:19.241 12.373 - 12.434: 99.5557% ( 1) 00:19:19.241 17.798 - 17.920: 99.5618% ( 1) 00:19:19.241 3994.575 - 4025.783: 99.9939% ( 71) 00:19:19.241 4150.613 - 4181.821: 100.0000% ( 1) 00:19:19.241 00:19:19.241 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:19.241 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:19.241 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:19.241 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:19.241 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:19.241 [ 00:19:19.241 { 00:19:19.241 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:19.241 "subtype": "Discovery", 00:19:19.241 "listen_addresses": [], 00:19:19.241 "allow_any_host": true, 00:19:19.241 "hosts": [] 00:19:19.241 }, 00:19:19.241 { 00:19:19.241 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:19.241 "subtype": "NVMe", 00:19:19.241 "listen_addresses": [ 00:19:19.241 { 00:19:19.241 "trtype": "VFIOUSER", 00:19:19.241 "adrfam": "IPv4", 00:19:19.241 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:19.241 "trsvcid": "0" 00:19:19.241 } 00:19:19.241 ], 00:19:19.241 "allow_any_host": true, 00:19:19.241 "hosts": [], 00:19:19.241 "serial_number": "SPDK1", 00:19:19.241 "model_number": "SPDK bdev Controller", 00:19:19.241 "max_namespaces": 32, 00:19:19.241 "min_cntlid": 1, 00:19:19.241 "max_cntlid": 65519, 00:19:19.241 "namespaces": [ 00:19:19.241 { 00:19:19.241 "nsid": 1, 00:19:19.241 "bdev_name": "Malloc1", 00:19:19.241 "name": "Malloc1", 00:19:19.241 "nguid": "ECD5F4CFE576475187E9649D08914B01", 00:19:19.241 "uuid": "ecd5f4cf-e576-4751-87e9-649d08914b01" 00:19:19.241 }, 00:19:19.241 { 00:19:19.241 "nsid": 2, 00:19:19.241 "bdev_name": "Malloc3", 00:19:19.241 "name": "Malloc3", 00:19:19.241 "nguid": "851FDD0C9E8449528AC91CE600459161", 00:19:19.241 "uuid": "851fdd0c-9e84-4952-8ac9-1ce600459161" 00:19:19.241 } 00:19:19.241 ] 00:19:19.241 }, 00:19:19.241 { 00:19:19.241 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:19.241 "subtype": "NVMe", 00:19:19.241 "listen_addresses": [ 00:19:19.241 { 00:19:19.241 "trtype": "VFIOUSER", 00:19:19.241 "adrfam": "IPv4", 00:19:19.241 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:19.241 "trsvcid": "0" 00:19:19.241 } 00:19:19.241 ], 00:19:19.241 "allow_any_host": true, 00:19:19.242 "hosts": [], 00:19:19.242 "serial_number": "SPDK2", 
00:19:19.242 "model_number": "SPDK bdev Controller", 00:19:19.242 "max_namespaces": 32, 00:19:19.242 "min_cntlid": 1, 00:19:19.242 "max_cntlid": 65519, 00:19:19.242 "namespaces": [ 00:19:19.242 { 00:19:19.242 "nsid": 1, 00:19:19.242 "bdev_name": "Malloc2", 00:19:19.242 "name": "Malloc2", 00:19:19.242 "nguid": "E9CD13C199454694A7537C19150C0C67", 00:19:19.242 "uuid": "e9cd13c1-9945-4694-a753-7c19150c0c67" 00:19:19.242 } 00:19:19.242 ] 00:19:19.242 } 00:19:19.242 ] 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=304339 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:19.242 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:19.500 [2024-12-16 22:25:08.973454] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:19.500 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:19.500 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:19.500 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:19.500 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:19.500 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:19.758 Malloc4 00:19:19.758 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:19.758 [2024-12-16 22:25:09.431914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:19.758 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:20.016 Asynchronous Event Request test 00:19:20.016 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.016 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.016 Registering asynchronous event callbacks... 00:19:20.016 Starting namespace attribute notice tests for all controllers... 00:19:20.016 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:20.016 aer_cb - Changed Namespace 00:19:20.016 Cleaning up... 00:19:20.016 [ 00:19:20.016 { 00:19:20.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:20.016 "subtype": "Discovery", 00:19:20.016 "listen_addresses": [], 00:19:20.016 "allow_any_host": true, 00:19:20.016 "hosts": [] 00:19:20.016 }, 00:19:20.016 { 00:19:20.016 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:20.016 "subtype": "NVMe", 00:19:20.016 "listen_addresses": [ 00:19:20.016 { 00:19:20.016 "trtype": "VFIOUSER", 00:19:20.016 "adrfam": "IPv4", 00:19:20.016 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:20.016 "trsvcid": "0" 00:19:20.016 } 00:19:20.016 ], 00:19:20.016 "allow_any_host": true, 00:19:20.016 "hosts": [], 00:19:20.016 "serial_number": "SPDK1", 00:19:20.016 "model_number": "SPDK bdev Controller", 00:19:20.016 "max_namespaces": 32, 00:19:20.016 "min_cntlid": 1, 00:19:20.016 "max_cntlid": 65519, 00:19:20.016 "namespaces": [ 00:19:20.016 { 00:19:20.016 "nsid": 1, 00:19:20.016 "bdev_name": "Malloc1", 00:19:20.016 "name": "Malloc1", 00:19:20.016 "nguid": "ECD5F4CFE576475187E9649D08914B01", 00:19:20.016 "uuid": "ecd5f4cf-e576-4751-87e9-649d08914b01" 00:19:20.016 }, 00:19:20.016 { 00:19:20.016 "nsid": 2, 00:19:20.016 "bdev_name": "Malloc3", 00:19:20.016 "name": "Malloc3", 00:19:20.016 "nguid": "851FDD0C9E8449528AC91CE600459161", 00:19:20.016 "uuid": "851fdd0c-9e84-4952-8ac9-1ce600459161" 00:19:20.016 } 00:19:20.016 ] 00:19:20.016 }, 00:19:20.016 { 00:19:20.016 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:20.016 "subtype": "NVMe", 00:19:20.016 "listen_addresses": [ 00:19:20.016 { 00:19:20.016 "trtype": "VFIOUSER", 00:19:20.016 "adrfam": "IPv4", 00:19:20.016 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:20.016 "trsvcid": "0" 00:19:20.016 } 00:19:20.016 ], 00:19:20.016 "allow_any_host": true, 00:19:20.016 "hosts": [], 00:19:20.016 "serial_number": "SPDK2", 00:19:20.016 "model_number": "SPDK bdev Controller", 00:19:20.016 "max_namespaces": 32, 00:19:20.016 "min_cntlid": 1, 00:19:20.016 "max_cntlid": 65519, 00:19:20.016 "namespaces": [ 00:19:20.016 
{ 00:19:20.016 "nsid": 1, 00:19:20.016 "bdev_name": "Malloc2", 00:19:20.016 "name": "Malloc2", 00:19:20.016 "nguid": "E9CD13C199454694A7537C19150C0C67", 00:19:20.016 "uuid": "e9cd13c1-9945-4694-a753-7c19150c0c67" 00:19:20.016 }, 00:19:20.016 { 00:19:20.016 "nsid": 2, 00:19:20.016 "bdev_name": "Malloc4", 00:19:20.016 "name": "Malloc4", 00:19:20.016 "nguid": "8E2456C8476C40CE855B0A63991467D6", 00:19:20.016 "uuid": "8e2456c8-476c-40ce-855b-0a63991467d6" 00:19:20.016 } 00:19:20.016 ] 00:19:20.016 } 00:19:20.016 ] 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 304339 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296764 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296764 ']' 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296764 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.016 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296764 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296764' 00:19:20.276 killing process with pid 296764 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296764 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296764 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304579 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304579' 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:20.276 Process pid: 304579 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # 
waitforlisten 304579 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 304579 ']' 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.276 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:20.535 [2024-12-16 22:25:10.013266] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:20.535 [2024-12-16 22:25:10.014133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:20.535 [2024-12-16 22:25:10.014176] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.535 [2024-12-16 22:25:10.089690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.535 [2024-12-16 22:25:10.111880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.535 [2024-12-16 22:25:10.111919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.535 [2024-12-16 22:25:10.111926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.535 [2024-12-16 22:25:10.111932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.535 [2024-12-16 22:25:10.111937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.535 [2024-12-16 22:25:10.113374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.535 [2024-12-16 22:25:10.113412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.535 [2024-12-16 22:25:10.113521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.535 [2024-12-16 22:25:10.113523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.535 [2024-12-16 22:25:10.177641] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:20.535 [2024-12-16 22:25:10.178578] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:20.535 [2024-12-16 22:25:10.178688] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:20.535 [2024-12-16 22:25:10.179104] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:20.535 [2024-12-16 22:25:10.179149] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
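With the target restarted in interrupt mode, the shell trace below rebuilds both vfio-user devices over the RPC socket (the target listens on /var/tmp/spdk.sock, which rpc.py uses by default). Condensed into a standalone sketch, using the same commands and paths that appear in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # -M -I are the interrupt-mode transport flags this run passes through.
    $RPC nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $RPC bdev_malloc_create 64 512 -b Malloc$i            # 64 MB RAM-backed bdev, 512-byte blocks
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0  # the socket directory serves as the transport address
    done
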
00:19:20.535 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.535 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:20.535 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:21.919 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:22.177 Malloc1 00:19:22.177 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:22.177 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:22.435 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:22.694 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:22.694 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:22.694 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:22.951 Malloc2 00:19:22.952 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:23.210 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:23.210 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304579 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 304579 ']' 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 304579 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304579 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304579' 00:19:23.468 killing process with pid 304579 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 304579 00:19:23.468 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 304579 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:23.728 00:19:23.728 real 0m51.139s 00:19:23.728 user 3m17.931s 00:19:23.728 sys 0m3.333s 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:23.728 ************************************ 00:19:23.728 END TEST nvmf_vfio_user 00:19:23.728 ************************************ 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:23.728 ************************************ 00:19:23.728 START TEST nvmf_vfio_user_nvme_compliance 00:19:23.728 ************************************ 00:19:23.728 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:23.988 * Looking for test storage... 
00:19:23.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:23.988 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.989 --rc genhtml_branch_coverage=1 00:19:23.989 --rc genhtml_function_coverage=1 00:19:23.989 --rc genhtml_legend=1 00:19:23.989 --rc geninfo_all_blocks=1 00:19:23.989 --rc geninfo_unexecuted_blocks=1 00:19:23.989 00:19:23.989 ' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.989 --rc genhtml_branch_coverage=1 00:19:23.989 --rc genhtml_function_coverage=1 00:19:23.989 --rc genhtml_legend=1 00:19:23.989 --rc geninfo_all_blocks=1 00:19:23.989 --rc geninfo_unexecuted_blocks=1 00:19:23.989 00:19:23.989 ' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.989 --rc genhtml_branch_coverage=1 00:19:23.989 --rc genhtml_function_coverage=1 00:19:23.989 --rc genhtml_legend=1 00:19:23.989 --rc geninfo_all_blocks=1 00:19:23.989 --rc geninfo_unexecuted_blocks=1 00:19:23.989 00:19:23.989 ' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:23.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.989 --rc genhtml_branch_coverage=1 00:19:23.989 --rc genhtml_function_coverage=1 00:19:23.989 --rc genhtml_legend=1 00:19:23.989 --rc geninfo_all_blocks=1 00:19:23.989 --rc 
geninfo_unexecuted_blocks=1 00:19:23.989 00:19:23.989 ' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=305157 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 305157' 00:19:23.989 Process pid: 305157 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 305157 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 305157 ']' 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.989 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.989 [2024-12-16 22:25:13.647756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:23.989 [2024-12-16 22:25:13.647804] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.249 [2024-12-16 22:25:13.720420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.249 [2024-12-16 22:25:13.742442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.249 [2024-12-16 22:25:13.742481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.249 [2024-12-16 22:25:13.742488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.249 [2024-12-16 22:25:13.742494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.249 [2024-12-16 22:25:13.742499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.249 [2024-12-16 22:25:13.743792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.249 [2024-12-16 22:25:13.743831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.249 [2024-12-16 22:25:13.743832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.249 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.249 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:24.249 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.184 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.442 malloc0 00:19:25.442 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.442 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:25.443 22:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.443 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:25.443 00:19:25.443 00:19:25.443 CUnit - A unit testing framework for C - Version 2.1-3 00:19:25.443 http://cunit.sourceforge.net/ 00:19:25.443 00:19:25.443 00:19:25.443 Suite: nvme_compliance 00:19:25.443 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 22:25:15.074684] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.443 [2024-12-16 22:25:15.076034] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:25.443 [2024-12-16 22:25:15.076051] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:25.443 [2024-12-16 22:25:15.076057] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:25.443 [2024-12-16 22:25:15.077700] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.443 passed 00:19:25.700 Test: admin_identify_ctrlr_verify_fused ...[2024-12-16 22:25:15.157299] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.700 [2024-12-16 22:25:15.160317] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.700 passed 00:19:25.700 Test: admin_identify_ns ...[2024-12-16 22:25:15.236447] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.700 [2024-12-16 22:25:15.300206] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:25.700 [2024-12-16 22:25:15.308202] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:25.700 [2024-12-16 22:25:15.329291] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:25.700 passed 00:19:25.958 Test: admin_get_features_mandatory_features ...[2024-12-16 22:25:15.405156] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.958 [2024-12-16 22:25:15.408172] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.958 passed 00:19:25.958 Test: admin_get_features_optional_features ...[2024-12-16 22:25:15.482675] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.958 [2024-12-16 22:25:15.485697] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.958 passed 00:19:25.958 Test: admin_set_features_number_of_queues ...[2024-12-16 22:25:15.561353] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.216 [2024-12-16 22:25:15.668293] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.216 passed 00:19:26.216 Test: admin_get_log_page_mandatory_logs ...[2024-12-16 22:25:15.742055] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.216 [2024-12-16 22:25:15.745080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.216 passed 00:19:26.216 Test: admin_get_log_page_with_lpo ...[2024-12-16 22:25:15.821740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.216 [2024-12-16 22:25:15.889203] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:26.216 [2024-12-16 22:25:15.902259] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.475 passed 00:19:26.475 Test: fabric_property_get ...[2024-12-16 22:25:15.976005] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.475 [2024-12-16 22:25:15.977244] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:26.475 [2024-12-16 22:25:15.979032] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.475 passed 00:19:26.475 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 22:25:16.056545] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.475 [2024-12-16 22:25:16.057778] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:26.475 [2024-12-16 22:25:16.059567] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.475 passed 00:19:26.475 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 22:25:16.137234] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.734 [2024-12-16 22:25:16.222202] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:26.734 [2024-12-16 22:25:16.238197] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:26.734 [2024-12-16 22:25:16.243285] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.734 passed 00:19:26.734 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 22:25:16.317077] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.734 [2024-12-16 22:25:16.318315] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:26.734 [2024-12-16 22:25:16.322107] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.734 passed 00:19:26.734 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 22:25:16.397716] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.992 [2024-12-16 22:25:16.477206] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:26.992 [2024-12-16 22:25:16.501199] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:26.992 [2024-12-16 22:25:16.506281] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.992 passed 00:19:26.992 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 22:25:16.580059] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:26.992 [2024-12-16 22:25:16.581300] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:26.992 [2024-12-16 22:25:16.581329] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:26.992 [2024-12-16 22:25:16.583082] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:26.992 passed 00:19:26.992 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 22:25:16.659815] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.250 [2024-12-16 22:25:16.751210] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:27.250 [2024-12-16 22:25:16.759232] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:27.250 [2024-12-16 22:25:16.767201] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:27.250 [2024-12-16 22:25:16.775201] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:27.250 [2024-12-16 22:25:16.804283] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.250 passed 00:19:27.250 Test: admin_create_io_sq_verify_pc ...[2024-12-16 22:25:16.878008] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.250 [2024-12-16 22:25:16.893206] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:27.250 [2024-12-16 22:25:16.911178] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.250 passed 00:19:27.509 Test: admin_create_io_qp_max_qps ...[2024-12-16 22:25:16.989697] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:28.444 [2024-12-16 22:25:18.093202] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:29.011 [2024-12-16 22:25:18.479497] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.011 passed 00:19:29.011 Test: admin_create_io_sq_shared_cq ...[2024-12-16 22:25:18.553368] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.011 [2024-12-16 22:25:18.689199] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:29.270 [2024-12-16 22:25:18.726257] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.270 passed 00:19:29.270 00:19:29.270 Run Summary: Type Total Ran Passed Failed Inactive 00:19:29.270 suites 1 1 n/a 0 0 00:19:29.270 tests 18 18 18 0 0 00:19:29.270 asserts 
360 360 360 0 n/a 00:19:29.270 00:19:29.270 Elapsed time = 1.498 seconds 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 305157 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 305157 ']' 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 305157 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305157 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305157' 00:19:29.270 killing process with pid 305157 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 305157 00:19:29.270 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 305157 00:19:29.529 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:29.529 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:29.529 00:19:29.529 real 0m5.607s 00:19:29.529 user 0m15.733s 00:19:29.529 sys 0m0.507s 00:19:29.529 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.529 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.529 ************************************ 00:19:29.529 END TEST nvmf_vfio_user_nvme_compliance 00:19:29.529 ************************************ 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.529 ************************************ 00:19:29.529 START TEST nvmf_vfio_user_fuzz 00:19:29.529 ************************************ 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:29.529 * Looking for test storage... 
00:19:29.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.529 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.789 --rc genhtml_branch_coverage=1 00:19:29.789 --rc genhtml_function_coverage=1 00:19:29.789 --rc genhtml_legend=1 00:19:29.789 --rc geninfo_all_blocks=1 00:19:29.789 --rc geninfo_unexecuted_blocks=1 00:19:29.789 00:19:29.789 ' 00:19:29.789 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.790 --rc genhtml_branch_coverage=1 00:19:29.790 --rc genhtml_function_coverage=1 00:19:29.790 --rc genhtml_legend=1 00:19:29.790 --rc geninfo_all_blocks=1 00:19:29.790 --rc geninfo_unexecuted_blocks=1 00:19:29.790 00:19:29.790 ' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.790 --rc genhtml_branch_coverage=1 00:19:29.790 --rc genhtml_function_coverage=1 00:19:29.790 --rc genhtml_legend=1 00:19:29.790 --rc geninfo_all_blocks=1 00:19:29.790 --rc geninfo_unexecuted_blocks=1 00:19:29.790 00:19:29.790 ' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.790 --rc genhtml_branch_coverage=1 00:19:29.790 --rc genhtml_function_coverage=1 00:19:29.790 --rc genhtml_legend=1 00:19:29.790 --rc geninfo_all_blocks=1 00:19:29.790 --rc geninfo_unexecuted_blocks=1 00:19:29.790 00:19:29.790 ' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:29.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=306182 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 306182' 00:19:29.790 Process pid: 306182 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 306182 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 306182 ']' 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
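The waitforlisten step traced above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. Below is a minimal sketch of that polling pattern, assuming SPDK's scripts/rpc.py is available; the real helper lives in common/autotest_common.sh and is not reproduced here.

# Poll the app's RPC socket until it responds, mirroring max_retries=100 and
# rpc_addr=/var/tmp/spdk.sock from the trace. rpc_get_methods is a standard
# SPDK RPC, used here purely as a liveness probe.
pid=306182
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "target is listening on $rpc_addr"
        break
    fi
    sleep 0.5
done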
00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.790 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.049 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.049 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:30.049 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.985 malloc0 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
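With the target up, the rpc_cmd calls traced above provision the vfio-user endpoint the fuzzer will attack. The same sequence written out as standalone rpc.py invocations, with the values copied verbatim from the trace (64 MiB malloc bdev, 512-byte blocks, socket directory /var/run/vfio-user); a sketch for reference, not the script's literal text.

rpc=scripts/rpc.py
nqn=nqn.2021-09.io.spdk:cnode0
traddr=/var/run/vfio-user

$rpc nvmf_create_transport -t VFIOUSER          # register the vfio-user transport
mkdir -p "$traddr"                              # socket directory for the listener
$rpc bdev_malloc_create 64 512 -b malloc0       # 64 MiB RAM-backed bdev
$rpc nvmf_create_subsystem "$nqn" -a -s spdk    # allow any host, serial "spdk"
$rpc nvmf_subsystem_add_ns "$nqn" malloc0       # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0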
00:19:30.985 22:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:03.060 Fuzzing completed. Shutting down the fuzz application 00:20:03.060 00:20:03.060 Dumping successful admin opcodes: 00:20:03.060 9, 10, 00:20:03.060 Dumping successful io opcodes: 00:20:03.060 0, 00:20:03.060 NS: 0x20000081ef00 I/O qp, Total commands completed: 1080063, total successful commands: 4256, random_seed: 2663209984 00:20:03.061 NS: 0x20000081ef00 admin qp, Total commands completed: 264624, total successful commands: 62, random_seed: 663330112 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 306182 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 306182 ']' 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 306182 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306182 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.061 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306182' 00:20:03.061 killing process with pid 306182 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 306182 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 306182 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:03.061 00:20:03.061 real 0m32.181s 00:20:03.061 user 0m34.334s 00:20:03.061 sys 0m26.466s 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.061 ************************************ 
00:20:03.061 END TEST nvmf_vfio_user_fuzz 00:20:03.061 ************************************ 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.061 ************************************ 00:20:03.061 START TEST nvmf_auth_target 00:20:03.061 ************************************ 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:03.061 * Looking for test storage... 00:20:03.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.061 --rc genhtml_branch_coverage=1 00:20:03.061 --rc genhtml_function_coverage=1 00:20:03.061 --rc genhtml_legend=1 00:20:03.061 --rc geninfo_all_blocks=1 00:20:03.061 --rc geninfo_unexecuted_blocks=1 00:20:03.061 00:20:03.061 ' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.061 --rc genhtml_branch_coverage=1 00:20:03.061 --rc genhtml_function_coverage=1 00:20:03.061 --rc genhtml_legend=1 00:20:03.061 --rc geninfo_all_blocks=1 00:20:03.061 --rc geninfo_unexecuted_blocks=1 00:20:03.061 00:20:03.061 ' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.061 --rc genhtml_branch_coverage=1 00:20:03.061 --rc genhtml_function_coverage=1 00:20:03.061 --rc genhtml_legend=1 00:20:03.061 --rc geninfo_all_blocks=1 00:20:03.061 --rc geninfo_unexecuted_blocks=1 00:20:03.061 00:20:03.061 ' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:03.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.061 --rc genhtml_branch_coverage=1 00:20:03.061 --rc genhtml_function_coverage=1 00:20:03.061 --rc genhtml_legend=1 00:20:03.061 --rc geninfo_all_blocks=1 00:20:03.061 --rc geninfo_unexecuted_blocks=1 00:20:03.061 00:20:03.061 ' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.061 22:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.061 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.062 22:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:08.339 
22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:08.339 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.339 22:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:08.339 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:08.339 Found net devices under 0000:af:00.0: cvl_0_0 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:08.339 Found net devices under 0000:af:00.1: cvl_0_1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.339 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.340 22:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:20:08.340 00:20:08.340 --- 10.0.0.2 ping statistics --- 00:20:08.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.340 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:08.340 00:20:08.340 --- 10.0.0.1 ping statistics --- 00:20:08.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.340 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=314381 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 314381 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314381 ']' 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=314403 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=880e8df984d4170f5f3c98d31376ef77f1044d70b0cae463 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FNk 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 880e8df984d4170f5f3c98d31376ef77f1044d70b0cae463 0 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 880e8df984d4170f5f3c98d31376ef77f1044d70b0cae463 0 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=880e8df984d4170f5f3c98d31376ef77f1044d70b0cae463 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FNk 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FNk 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.FNk 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d2b0dd2f7eeae1921a8ad02a02fb6736478edea8b1aaeba50afe8aef1043d538 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uxB 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d2b0dd2f7eeae1921a8ad02a02fb6736478edea8b1aaeba50afe8aef1043d538 3 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d2b0dd2f7eeae1921a8ad02a02fb6736478edea8b1aaeba50afe8aef1043d538 3 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d2b0dd2f7eeae1921a8ad02a02fb6736478edea8b1aaeba50afe8aef1043d538 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uxB 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uxB 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.uxB 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=06dbf24416bc294331051c2510814c91 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AlP 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 06dbf24416bc294331051c2510814c91 1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 06dbf24416bc294331051c2510814c91 1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=06dbf24416bc294331051c2510814c91 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AlP 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AlP 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.AlP 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:08.340 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea2bbcb39aa864dd7412c3bf3a5f6b83a075c66157345137 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2qy 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea2bbcb39aa864dd7412c3bf3a5f6b83a075c66157345137 2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea2bbcb39aa864dd7412c3bf3a5f6b83a075c66157345137 2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.341 22:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea2bbcb39aa864dd7412c3bf3a5f6b83a075c66157345137 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2qy 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2qy 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.2qy 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b5776d9906e6974e55905d8170e192db9c6416be78697a7 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZfZ 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b5776d9906e6974e55905d8170e192db9c6416be78697a7 2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2b5776d9906e6974e55905d8170e192db9c6416be78697a7 2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b5776d9906e6974e55905d8170e192db9c6416be78697a7 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZfZ 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZfZ 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ZfZ 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fcddd888facb7e7e2c62c5e3386f6a7d 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TPw 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fcddd888facb7e7e2c62c5e3386f6a7d 1 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fcddd888facb7e7e2c62c5e3386f6a7d 1 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.341 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.341 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fcddd888facb7e7e2c62c5e3386f6a7d 00:20:08.341 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:08.341 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TPw 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TPw 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.TPw 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7674175d3988d395932f75782f07e6ea38dc646338f3d90155a95494dee4fd84 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Aqg 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 7674175d3988d395932f75782f07e6ea38dc646338f3d90155a95494dee4fd84 3 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7674175d3988d395932f75782f07e6ea38dc646338f3d90155a95494dee4fd84 3 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7674175d3988d395932f75782f07e6ea38dc646338f3d90155a95494dee4fd84 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Aqg 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Aqg 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Aqg 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 314381 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314381 ']' 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.600 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 314403 /var/tmp/host.sock 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314403 ']' 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:08.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FNk 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FNk 00:20:08.859 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FNk 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.uxB ]] 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uxB 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uxB 00:20:09.117 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uxB 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AlP 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.376 22:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AlP 00:20:09.376 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AlP 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.2qy ]] 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2qy 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2qy 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2qy 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZfZ 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.635 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZfZ 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZfZ 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.TPw ]] 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPw 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPw 00:20:09.894 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPw 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.153 22:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Aqg 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Aqg 00:20:10.153 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Aqg 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.411 22:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.670 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.670 
22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.670 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.928 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.928 { 00:20:10.928 "cntlid": 1, 00:20:10.928 "qid": 0, 00:20:10.928 "state": "enabled", 00:20:10.928 "thread": "nvmf_tgt_poll_group_000", 00:20:10.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.928 "listen_address": { 00:20:10.928 "trtype": "TCP", 00:20:10.929 "adrfam": "IPv4", 00:20:10.929 "traddr": "10.0.0.2", 00:20:10.929 "trsvcid": "4420" 00:20:10.929 }, 00:20:10.929 "peer_address": { 00:20:10.929 "trtype": "TCP", 00:20:10.929 "adrfam": "IPv4", 00:20:10.929 "traddr": "10.0.0.1", 00:20:10.929 "trsvcid": "57164" 00:20:10.929 }, 00:20:10.929 "auth": { 00:20:10.929 "state": "completed", 00:20:10.929 "digest": "sha256", 00:20:10.929 "dhgroup": "null" 00:20:10.929 } 00:20:10.929 } 00:20:10.929 ]' 00:20:10.929 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.187 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.445 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:11.445 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.731 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.990 22:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.990 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.249 00:20:15.249 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.249 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.249 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.508 { 00:20:15.508 "cntlid": 3, 00:20:15.508 "qid": 0, 00:20:15.508 "state": "enabled", 00:20:15.508 "thread": "nvmf_tgt_poll_group_000", 00:20:15.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.508 "listen_address": { 00:20:15.508 "trtype": "TCP", 00:20:15.508 "adrfam": "IPv4", 00:20:15.508 "traddr": "10.0.0.2", 00:20:15.508 "trsvcid": "4420" 00:20:15.508 }, 00:20:15.508 "peer_address": { 00:20:15.508 "trtype": "TCP", 00:20:15.508 "adrfam": "IPv4", 00:20:15.508 "traddr": "10.0.0.1", 00:20:15.508 "trsvcid": "53940" 00:20:15.508 }, 00:20:15.508 "auth": { 00:20:15.508 "state": "completed", 00:20:15.508 "digest": "sha256", 00:20:15.508 "dhgroup": "null" 00:20:15.508 } 00:20:15.508 } 00:20:15.508 ]' 00:20:15.508 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.508 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.766 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:15.766 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.332 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.591 22:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.591 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.850 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.850 { 00:20:16.850 "cntlid": 5, 00:20:16.850 "qid": 0, 00:20:16.850 "state": "enabled", 00:20:16.850 "thread": "nvmf_tgt_poll_group_000", 00:20:16.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.850 "listen_address": { 00:20:16.850 "trtype": "TCP", 00:20:16.850 "adrfam": "IPv4", 00:20:16.850 "traddr": "10.0.0.2", 00:20:16.850 "trsvcid": "4420" 00:20:16.850 }, 00:20:16.850 "peer_address": { 00:20:16.850 "trtype": "TCP", 00:20:16.850 "adrfam": "IPv4", 00:20:16.850 "traddr": "10.0.0.1", 00:20:16.850 "trsvcid": "53970" 00:20:16.850 }, 00:20:16.850 "auth": { 00:20:16.850 "state": "completed", 00:20:16.850 "digest": "sha256", 00:20:16.850 "dhgroup": "null" 00:20:16.850 } 00:20:16.850 } 00:20:16.850 ]' 00:20:16.850 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.109 22:26:06 
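[Editor's note] After each attach, the script pulls the qpair list from the target and checks the negotiated auth parameters with jq, which is what the target/auth.sh@73 through @77 entries above show. A condensed sketch of that verification step, assuming the same qpairs JSON shape printed in the trace (the herestring form is a simplification of the script's pipelines):

    # Fetch the qpairs for the subsystem and assert on the negotiated auth fields
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An "auth.state" of "completed" is the actual pass criterion here; the digest and dhgroup checks confirm the handshake used the pair that bdev_nvme_set_options just restricted the host to.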
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.109 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.367 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:17.367 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.934 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.193 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.193 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.451 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.451 { 00:20:18.451 "cntlid": 7, 00:20:18.451 "qid": 0, 00:20:18.451 "state": "enabled", 00:20:18.451 "thread": "nvmf_tgt_poll_group_000", 00:20:18.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.451 "listen_address": { 00:20:18.451 "trtype": "TCP", 00:20:18.452 "adrfam": "IPv4", 00:20:18.452 "traddr": "10.0.0.2", 00:20:18.452 "trsvcid": "4420" 00:20:18.452 }, 00:20:18.452 "peer_address": { 00:20:18.452 "trtype": "TCP", 00:20:18.452 "adrfam": "IPv4", 00:20:18.452 "traddr": "10.0.0.1", 00:20:18.452 "trsvcid": "53998" 00:20:18.452 }, 00:20:18.452 "auth": { 00:20:18.452 "state": "completed", 00:20:18.452 "digest": "sha256", 00:20:18.452 "dhgroup": "null" 00:20:18.452 } 00:20:18.452 } 00:20:18.452 ]' 00:20:18.452 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.452 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.452 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:18.710 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.277 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
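[Editor's note] The second leg of each cycle, visible as the target/auth.sh@80 and @82 entries, exercises the kernel initiator through nvme-cli rather than the SPDK host RPCs. Condensed from those entries, with the secret values replaced by placeholders since the real DHHC-1 strings are key material:

    # Connect through the kernel host stack, presenting the DH-CHAP secrets inline
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:01:<host key, elided>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key, elided>'

    # Tear the session down once the handshake has been exercised
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" lines in the trace are the expected output of that disconnect.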
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.536 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.795 00:20:19.795 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.795 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.795 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.053 { 00:20:20.053 "cntlid": 9, 00:20:20.053 "qid": 0, 00:20:20.053 "state": "enabled", 00:20:20.053 "thread": "nvmf_tgt_poll_group_000", 00:20:20.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.053 "listen_address": { 00:20:20.053 "trtype": "TCP", 00:20:20.053 "adrfam": "IPv4", 00:20:20.053 "traddr": "10.0.0.2", 00:20:20.053 "trsvcid": "4420" 00:20:20.053 }, 00:20:20.053 "peer_address": { 00:20:20.053 "trtype": "TCP", 00:20:20.053 "adrfam": "IPv4", 00:20:20.053 "traddr": "10.0.0.1", 00:20:20.053 "trsvcid": "54026" 00:20:20.053 }, 00:20:20.053 "auth": { 00:20:20.053 "state": "completed", 00:20:20.053 "digest": "sha256", 00:20:20.053 "dhgroup": "ffdhe2048" 00:20:20.053 } 00:20:20.053 } 00:20:20.053 ]' 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.053 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.054 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.312 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:20.312 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.877 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.136 22:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.136 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.395 00:20:21.395 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.395 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.395 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.653 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.653 { 00:20:21.653 "cntlid": 11, 00:20:21.653 "qid": 0, 00:20:21.653 "state": "enabled", 00:20:21.654 "thread": "nvmf_tgt_poll_group_000", 00:20:21.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.654 "listen_address": { 00:20:21.654 "trtype": "TCP", 00:20:21.654 "adrfam": "IPv4", 00:20:21.654 "traddr": "10.0.0.2", 00:20:21.654 "trsvcid": "4420" 00:20:21.654 }, 00:20:21.654 "peer_address": { 00:20:21.654 "trtype": "TCP", 00:20:21.654 "adrfam": "IPv4", 00:20:21.654 "traddr": "10.0.0.1", 00:20:21.654 "trsvcid": "54034" 00:20:21.654 }, 00:20:21.654 "auth": { 00:20:21.654 "state": "completed", 00:20:21.654 "digest": "sha256", 00:20:21.654 "dhgroup": "ffdhe2048" 00:20:21.654 } 00:20:21.654 } 00:20:21.654 ]' 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.654 22:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.654 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.912 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:21.912 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.478 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.737 22:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.737 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.995 00:20:22.995 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.995 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.995 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.254 { 00:20:23.254 "cntlid": 13, 00:20:23.254 "qid": 0, 00:20:23.254 "state": "enabled", 00:20:23.254 "thread": "nvmf_tgt_poll_group_000", 00:20:23.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.254 "listen_address": { 00:20:23.254 "trtype": "TCP", 00:20:23.254 "adrfam": "IPv4", 00:20:23.254 "traddr": "10.0.0.2", 00:20:23.254 "trsvcid": "4420" 00:20:23.254 }, 00:20:23.254 "peer_address": { 00:20:23.254 "trtype": "TCP", 00:20:23.254 "adrfam": "IPv4", 00:20:23.254 "traddr": "10.0.0.1", 00:20:23.254 "trsvcid": "46126" 00:20:23.254 }, 00:20:23.254 "auth": { 00:20:23.254 "state": "completed", 00:20:23.254 "digest": 
"sha256", 00:20:23.254 "dhgroup": "ffdhe2048" 00:20:23.254 } 00:20:23.254 } 00:20:23.254 ]' 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.254 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.512 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:23.512 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.080 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.339 22:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.339 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.598 00:20:24.598 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.598 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.598 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.857 { 00:20:24.857 "cntlid": 15, 00:20:24.857 "qid": 0, 00:20:24.857 "state": "enabled", 00:20:24.857 "thread": "nvmf_tgt_poll_group_000", 00:20:24.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.857 "listen_address": { 00:20:24.857 "trtype": "TCP", 00:20:24.857 "adrfam": "IPv4", 00:20:24.857 "traddr": "10.0.0.2", 00:20:24.857 "trsvcid": "4420" 00:20:24.857 }, 00:20:24.857 "peer_address": { 00:20:24.857 "trtype": "TCP", 00:20:24.857 "adrfam": "IPv4", 00:20:24.857 "traddr": "10.0.0.1", 00:20:24.857 
"trsvcid": "46160" 00:20:24.857 }, 00:20:24.857 "auth": { 00:20:24.857 "state": "completed", 00:20:24.857 "digest": "sha256", 00:20:24.857 "dhgroup": "ffdhe2048" 00:20:24.857 } 00:20:24.857 } 00:20:24.857 ]' 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.857 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.116 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:25.116 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.683 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:25.941 22:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.941 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.200 00:20:26.200 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.200 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.200 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.458 { 00:20:26.458 "cntlid": 17, 00:20:26.458 "qid": 0, 00:20:26.458 "state": "enabled", 00:20:26.458 "thread": "nvmf_tgt_poll_group_000", 00:20:26.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.458 "listen_address": { 00:20:26.458 "trtype": "TCP", 00:20:26.458 "adrfam": "IPv4", 
00:20:26.458 "traddr": "10.0.0.2", 00:20:26.458 "trsvcid": "4420" 00:20:26.458 }, 00:20:26.458 "peer_address": { 00:20:26.458 "trtype": "TCP", 00:20:26.458 "adrfam": "IPv4", 00:20:26.458 "traddr": "10.0.0.1", 00:20:26.458 "trsvcid": "46176" 00:20:26.458 }, 00:20:26.458 "auth": { 00:20:26.458 "state": "completed", 00:20:26.458 "digest": "sha256", 00:20:26.458 "dhgroup": "ffdhe3072" 00:20:26.458 } 00:20:26.458 } 00:20:26.458 ]' 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.458 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.458 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.458 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.458 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.458 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.458 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.718 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:26.718 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.285 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.543 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.802 00:20:27.802 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.802 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.802 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.060 { 
00:20:28.060 "cntlid": 19, 00:20:28.060 "qid": 0, 00:20:28.060 "state": "enabled", 00:20:28.060 "thread": "nvmf_tgt_poll_group_000", 00:20:28.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.060 "listen_address": { 00:20:28.060 "trtype": "TCP", 00:20:28.060 "adrfam": "IPv4", 00:20:28.060 "traddr": "10.0.0.2", 00:20:28.060 "trsvcid": "4420" 00:20:28.060 }, 00:20:28.060 "peer_address": { 00:20:28.060 "trtype": "TCP", 00:20:28.060 "adrfam": "IPv4", 00:20:28.060 "traddr": "10.0.0.1", 00:20:28.060 "trsvcid": "46204" 00:20:28.060 }, 00:20:28.060 "auth": { 00:20:28.060 "state": "completed", 00:20:28.060 "digest": "sha256", 00:20:28.060 "dhgroup": "ffdhe3072" 00:20:28.060 } 00:20:28.060 } 00:20:28.060 ]' 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.060 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.319 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:28.320 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.887 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.146 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.405 00:20:29.405 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.405 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.405 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.405 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.405 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.405 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.405 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.405 22:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.405 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.405 { 00:20:29.405 "cntlid": 21, 00:20:29.405 "qid": 0, 00:20:29.405 "state": "enabled", 00:20:29.405 "thread": "nvmf_tgt_poll_group_000", 00:20:29.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.405 "listen_address": { 00:20:29.406 "trtype": "TCP", 00:20:29.406 "adrfam": "IPv4", 00:20:29.406 "traddr": "10.0.0.2", 00:20:29.406 "trsvcid": "4420" 00:20:29.406 }, 00:20:29.406 "peer_address": { 00:20:29.406 "trtype": "TCP", 00:20:29.406 "adrfam": "IPv4", 00:20:29.406 "traddr": "10.0.0.1", 00:20:29.406 "trsvcid": "46222" 00:20:29.406 }, 00:20:29.406 "auth": { 00:20:29.406 "state": "completed", 00:20:29.406 "digest": "sha256", 00:20:29.406 "dhgroup": "ffdhe3072" 00:20:29.406 } 00:20:29.406 } 00:20:29.406 ]' 00:20:29.406 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.664 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.923 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:29.923 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:30.490 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.490 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.490 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.491 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.491 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:30.491 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.491 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.491 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.491 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.750 00:20:30.750 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.750 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.750 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.009 22:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.009 { 00:20:31.009 "cntlid": 23, 00:20:31.009 "qid": 0, 00:20:31.009 "state": "enabled", 00:20:31.009 "thread": "nvmf_tgt_poll_group_000", 00:20:31.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.009 "listen_address": { 00:20:31.009 "trtype": "TCP", 00:20:31.009 "adrfam": "IPv4", 00:20:31.009 "traddr": "10.0.0.2", 00:20:31.009 "trsvcid": "4420" 00:20:31.009 }, 00:20:31.009 "peer_address": { 00:20:31.009 "trtype": "TCP", 00:20:31.009 "adrfam": "IPv4", 00:20:31.009 "traddr": "10.0.0.1", 00:20:31.009 "trsvcid": "46252" 00:20:31.009 }, 00:20:31.009 "auth": { 00:20:31.009 "state": "completed", 00:20:31.009 "digest": "sha256", 00:20:31.009 "dhgroup": "ffdhe3072" 00:20:31.009 } 00:20:31.009 } 00:20:31.009 ]' 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.009 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.267 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.267 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.267 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.267 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.267 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.526 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:31.526 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.094 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.353 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.612 { 00:20:32.612 "cntlid": 25, 00:20:32.612 "qid": 0, 00:20:32.612 "state": "enabled", 00:20:32.612 "thread": "nvmf_tgt_poll_group_000", 00:20:32.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.612 "listen_address": { 00:20:32.612 "trtype": "TCP", 00:20:32.612 "adrfam": "IPv4", 00:20:32.612 "traddr": "10.0.0.2", 00:20:32.612 "trsvcid": "4420" 00:20:32.612 }, 00:20:32.612 "peer_address": { 00:20:32.612 "trtype": "TCP", 00:20:32.612 "adrfam": "IPv4", 00:20:32.612 "traddr": "10.0.0.1", 00:20:32.612 "trsvcid": "46274" 00:20:32.612 }, 00:20:32.612 "auth": { 00:20:32.612 "state": "completed", 00:20:32.612 "digest": "sha256", 00:20:32.612 "dhgroup": "ffdhe4096" 00:20:32.612 } 00:20:32.612 } 00:20:32.612 ]' 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.612 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.871 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.871 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.871 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.871 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.871 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.130 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:33.130 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.697 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.956 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.215 { 00:20:34.215 "cntlid": 27, 00:20:34.215 "qid": 0, 00:20:34.215 "state": "enabled", 00:20:34.215 "thread": "nvmf_tgt_poll_group_000", 00:20:34.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.215 "listen_address": { 00:20:34.215 "trtype": "TCP", 00:20:34.215 "adrfam": "IPv4", 00:20:34.215 "traddr": "10.0.0.2", 00:20:34.215 "trsvcid": "4420" 00:20:34.215 }, 00:20:34.215 "peer_address": { 00:20:34.215 "trtype": "TCP", 00:20:34.215 "adrfam": "IPv4", 00:20:34.215 "traddr": "10.0.0.1", 00:20:34.215 "trsvcid": "34938" 00:20:34.215 }, 00:20:34.215 "auth": { 00:20:34.215 "state": "completed", 00:20:34.215 "digest": "sha256", 00:20:34.215 "dhgroup": "ffdhe4096" 00:20:34.215 } 00:20:34.215 } 00:20:34.215 ]' 00:20:34.215 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.474 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.474 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.474 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.474 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.474 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.474 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.474 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.733 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:34.733 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:35.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.301 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.561 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
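[Annotation, not captured log output] The iterations logged above all exercise the same DH-HMAC-CHAP round-trip, once per (digest, dhgroup, keyid) combination. Below is a minimal sketch of one such iteration, assuming the socket paths, NQNs, and addresses shown in this run; RPC/SUBNQN/HOSTNQN are shorthand variables introduced here, the target-side calls are assumed to go to the target's default RPC socket (the log's rpc_cmd wrapper hides the exact invocation), and $DHCHAP_SECRET/$DHCHAP_CTRL_SECRET stand in for the literal DHHC-1 strings rather than the real test values.

# Sketch of one connect_authenticate iteration from target/auth.sh.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# Host-side initiator options go to the host RPC socket; pin one digest/dhgroup.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Target side: allow the host on the subsystem, binding it to a key name
# registered earlier in the test (key2), plus an optional controller key.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller through the host bdev stack, forcing the DH-HMAC-CHAP
# handshake; note the inner -s 4420 is the trsvcid, not an RPC socket.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller came up and the qpair negotiated what was requested,
# mirroring the [[ ... ]] checks in the log above.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'        # expect sha256
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'       # expect ffdhe4096
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'         # expect completed

# Tear down the bdev path, repeat the handshake via nvme-cli using the
# literal DHHC-1 secrets, then deauthorize the host for the next iteration.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN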
00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.819 { 00:20:35.819 "cntlid": 29, 00:20:35.819 "qid": 0, 00:20:35.819 "state": "enabled", 00:20:35.819 "thread": "nvmf_tgt_poll_group_000", 00:20:35.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.819 "listen_address": { 00:20:35.819 "trtype": "TCP", 00:20:35.819 "adrfam": "IPv4", 00:20:35.819 "traddr": "10.0.0.2", 00:20:35.819 "trsvcid": "4420" 00:20:35.819 }, 00:20:35.819 "peer_address": { 00:20:35.819 "trtype": "TCP", 00:20:35.819 "adrfam": "IPv4", 00:20:35.819 "traddr": "10.0.0.1", 00:20:35.819 "trsvcid": "34962" 00:20:35.819 }, 00:20:35.819 "auth": { 00:20:35.819 "state": "completed", 00:20:35.819 "digest": "sha256", 00:20:35.819 "dhgroup": "ffdhe4096" 00:20:35.819 } 00:20:35.819 } 00:20:35.819 ]' 00:20:35.819 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.078 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.336 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:36.336 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: 
--dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.904 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.163 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.163 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.163 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.163 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.422 00:20:37.422 22:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.422 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.422 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.422 { 00:20:37.422 "cntlid": 31, 00:20:37.422 "qid": 0, 00:20:37.422 "state": "enabled", 00:20:37.422 "thread": "nvmf_tgt_poll_group_000", 00:20:37.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.422 "listen_address": { 00:20:37.422 "trtype": "TCP", 00:20:37.422 "adrfam": "IPv4", 00:20:37.422 "traddr": "10.0.0.2", 00:20:37.422 "trsvcid": "4420" 00:20:37.422 }, 00:20:37.422 "peer_address": { 00:20:37.422 "trtype": "TCP", 00:20:37.422 "adrfam": "IPv4", 00:20:37.422 "traddr": "10.0.0.1", 00:20:37.422 "trsvcid": "34988" 00:20:37.422 }, 00:20:37.422 "auth": { 00:20:37.422 "state": "completed", 00:20:37.422 "digest": "sha256", 00:20:37.422 "dhgroup": "ffdhe4096" 00:20:37.422 } 00:20:37.422 } 00:20:37.422 ]' 00:20:37.422 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.681 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.940 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:37.940 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.508 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.766 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.025 00:20:39.025 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.025 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.025 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.284 { 00:20:39.284 "cntlid": 33, 00:20:39.284 "qid": 0, 00:20:39.284 "state": "enabled", 00:20:39.284 "thread": "nvmf_tgt_poll_group_000", 00:20:39.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.284 "listen_address": { 00:20:39.284 "trtype": "TCP", 00:20:39.284 "adrfam": "IPv4", 00:20:39.284 "traddr": "10.0.0.2", 00:20:39.284 "trsvcid": "4420" 00:20:39.284 }, 00:20:39.284 "peer_address": { 00:20:39.284 "trtype": "TCP", 00:20:39.284 "adrfam": "IPv4", 00:20:39.284 "traddr": "10.0.0.1", 00:20:39.284 "trsvcid": "35016" 00:20:39.284 }, 00:20:39.284 "auth": { 00:20:39.284 "state": "completed", 00:20:39.284 "digest": "sha256", 00:20:39.284 "dhgroup": "ffdhe6144" 00:20:39.284 } 00:20:39.284 } 00:20:39.284 ]' 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.284 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.285 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.285 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.285 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.285 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.285 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.543 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:39.543 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.110 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.368 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.627 00:20:40.627 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.627 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.627 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.886 { 00:20:40.886 "cntlid": 35, 00:20:40.886 "qid": 0, 00:20:40.886 "state": "enabled", 00:20:40.886 "thread": "nvmf_tgt_poll_group_000", 00:20:40.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.886 "listen_address": { 00:20:40.886 "trtype": "TCP", 00:20:40.886 "adrfam": "IPv4", 00:20:40.886 "traddr": "10.0.0.2", 00:20:40.886 "trsvcid": "4420" 00:20:40.886 }, 00:20:40.886 "peer_address": { 00:20:40.886 "trtype": "TCP", 00:20:40.886 "adrfam": "IPv4", 00:20:40.886 "traddr": "10.0.0.1", 00:20:40.886 "trsvcid": "35044" 00:20:40.886 }, 00:20:40.886 "auth": { 00:20:40.886 "state": "completed", 00:20:40.886 "digest": "sha256", 00:20:40.886 "dhgroup": "ffdhe6144" 00:20:40.886 } 00:20:40.886 } 00:20:40.886 ]' 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.886 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.145 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.145 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.145 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.145 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:41.145 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.712 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.971 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.229 00:20:42.229 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.229 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.229 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.488 { 00:20:42.488 "cntlid": 37, 00:20:42.488 "qid": 0, 00:20:42.488 "state": "enabled", 00:20:42.488 "thread": "nvmf_tgt_poll_group_000", 00:20:42.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.488 "listen_address": { 00:20:42.488 "trtype": "TCP", 00:20:42.488 "adrfam": "IPv4", 00:20:42.488 "traddr": "10.0.0.2", 00:20:42.488 "trsvcid": "4420" 00:20:42.488 }, 00:20:42.488 "peer_address": { 00:20:42.488 "trtype": "TCP", 00:20:42.488 "adrfam": "IPv4", 00:20:42.488 "traddr": "10.0.0.1", 00:20:42.488 "trsvcid": "35068" 00:20:42.488 }, 00:20:42.488 "auth": { 00:20:42.488 "state": "completed", 00:20:42.488 "digest": "sha256", 00:20:42.488 "dhgroup": "ffdhe6144" 00:20:42.488 } 00:20:42.488 } 00:20:42.488 ]' 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.488 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.747 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.747 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.747 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.747 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:42.747 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.006 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:43.006 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.573 22:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.573 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.140 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.140 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.140 { 00:20:44.140 "cntlid": 39, 00:20:44.141 "qid": 0, 00:20:44.141 "state": "enabled", 00:20:44.141 "thread": "nvmf_tgt_poll_group_000", 00:20:44.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.141 "listen_address": { 00:20:44.141 "trtype": "TCP", 00:20:44.141 "adrfam": "IPv4", 00:20:44.141 "traddr": "10.0.0.2", 00:20:44.141 "trsvcid": "4420" 00:20:44.141 }, 00:20:44.141 "peer_address": { 00:20:44.141 "trtype": "TCP", 00:20:44.141 "adrfam": "IPv4", 00:20:44.141 "traddr": "10.0.0.1", 00:20:44.141 "trsvcid": "44706" 00:20:44.141 }, 00:20:44.141 "auth": { 00:20:44.141 "state": "completed", 00:20:44.141 "digest": "sha256", 00:20:44.141 "dhgroup": "ffdhe6144" 00:20:44.141 } 00:20:44.141 } 00:20:44.141 ]' 00:20:44.141 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.399 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.658 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:44.658 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:45.226 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.226 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.226 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
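[Editor's note] Every connect_authenticate iteration in this trace follows the same shape. The sketch below reconstructs one pass from the logged commands (auth.sh tags @60-@78); the hostrpc wrapper body and the helper provenance comments are assumptions inferred from the @31 trace lines, while the addresses, NQNs, and RPC names are taken verbatim from the log:

    # Reconstruction of one test iteration, assembled from the xtrace output.
    # hostrpc's definition is an assumption based on the @31 lines; rpc_cmd is
    # the autotest helper that talks to the target-side RPC socket.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Target side: allow the host and register its DH-CHAP key pair (@70).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller, authenticating with the same keys (@60).
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the attach and that the qpair finished authentication (@73-@77).
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

    # Tear down before the next digest/dhgroup/key combination (@78).
    hostrpc bdev_nvme_detach_controller nvme0

After the RPC-level pass, the same secrets are exercised through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., tags @36/@80), then the host is disconnected and removed (@82/@83) so the next combination starts from a clean subsystem.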
00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.227 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.794 00:20:45.794 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.794 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.794 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.052 { 00:20:46.052 "cntlid": 41, 00:20:46.052 "qid": 0, 00:20:46.052 "state": "enabled", 00:20:46.052 "thread": "nvmf_tgt_poll_group_000", 00:20:46.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.052 "listen_address": { 00:20:46.052 "trtype": "TCP", 00:20:46.052 "adrfam": "IPv4", 00:20:46.052 "traddr": "10.0.0.2", 00:20:46.052 "trsvcid": "4420" 00:20:46.052 }, 00:20:46.052 "peer_address": { 00:20:46.052 "trtype": "TCP", 00:20:46.052 "adrfam": "IPv4", 00:20:46.052 "traddr": "10.0.0.1", 00:20:46.052 "trsvcid": "44730" 00:20:46.052 }, 00:20:46.052 "auth": { 00:20:46.052 "state": "completed", 00:20:46.052 "digest": "sha256", 00:20:46.052 "dhgroup": "ffdhe8192" 00:20:46.052 } 00:20:46.052 } 00:20:46.052 ]' 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.052 22:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.052 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.311 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:46.311 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.879 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.138 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.706 00:20:47.706 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.706 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.706 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.964 { 00:20:47.964 "cntlid": 43, 00:20:47.964 "qid": 0, 00:20:47.964 "state": "enabled", 00:20:47.964 "thread": "nvmf_tgt_poll_group_000", 00:20:47.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.964 "listen_address": { 00:20:47.964 "trtype": "TCP", 00:20:47.964 "adrfam": "IPv4", 00:20:47.964 "traddr": "10.0.0.2", 00:20:47.964 "trsvcid": "4420" 00:20:47.964 }, 00:20:47.964 "peer_address": { 00:20:47.964 "trtype": "TCP", 00:20:47.964 "adrfam": "IPv4", 00:20:47.964 "traddr": "10.0.0.1", 00:20:47.964 "trsvcid": "44746" 00:20:47.964 }, 00:20:47.964 "auth": { 00:20:47.964 "state": "completed", 00:20:47.964 "digest": "sha256", 00:20:47.964 "dhgroup": "ffdhe8192" 00:20:47.964 } 00:20:47.964 } 00:20:47.964 ]' 00:20:47.964 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.965 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.223 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:48.223 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.791 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.050 22:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.050 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.309 00:20:49.309 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.309 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.309 22:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.568 { 00:20:49.568 "cntlid": 45, 00:20:49.568 "qid": 0, 00:20:49.568 "state": "enabled", 00:20:49.568 "thread": "nvmf_tgt_poll_group_000", 00:20:49.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.568 "listen_address": { 00:20:49.568 "trtype": "TCP", 00:20:49.568 "adrfam": "IPv4", 00:20:49.568 "traddr": "10.0.0.2", 00:20:49.568 "trsvcid": "4420" 00:20:49.568 }, 00:20:49.568 "peer_address": { 00:20:49.568 "trtype": "TCP", 00:20:49.568 "adrfam": "IPv4", 00:20:49.568 "traddr": "10.0.0.1", 00:20:49.568 "trsvcid": "44778" 00:20:49.568 }, 00:20:49.568 "auth": { 00:20:49.568 "state": "completed", 00:20:49.568 "digest": "sha256", 00:20:49.568 "dhgroup": "ffdhe8192" 00:20:49.568 } 00:20:49.568 } 00:20:49.568 ]' 00:20:49.568 
22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.568 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.827 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.827 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.827 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.827 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.827 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.086 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:50.086 22:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.654 22:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.654 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.222 00:20:51.222 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.222 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.222 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.480 { 00:20:51.480 "cntlid": 47, 00:20:51.480 "qid": 0, 00:20:51.480 "state": "enabled", 00:20:51.480 "thread": "nvmf_tgt_poll_group_000", 00:20:51.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.480 "listen_address": { 00:20:51.480 "trtype": "TCP", 00:20:51.480 "adrfam": "IPv4", 00:20:51.480 "traddr": "10.0.0.2", 00:20:51.480 "trsvcid": "4420" 00:20:51.480 }, 00:20:51.480 "peer_address": { 00:20:51.480 "trtype": "TCP", 00:20:51.480 "adrfam": "IPv4", 00:20:51.480 "traddr": "10.0.0.1", 00:20:51.480 "trsvcid": "44794" 00:20:51.480 }, 00:20:51.480 "auth": { 00:20:51.480 "state": "completed", 00:20:51.480 
"digest": "sha256", 00:20:51.480 "dhgroup": "ffdhe8192" 00:20:51.480 } 00:20:51.480 } 00:20:51.480 ]' 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.480 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.738 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.738 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.738 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.738 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:51.738 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.305 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:52.564 22:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.564 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.822 00:20:52.823 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.823 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.823 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.081 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.081 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.081 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.081 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.082 { 00:20:53.082 "cntlid": 49, 00:20:53.082 "qid": 0, 00:20:53.082 "state": "enabled", 00:20:53.082 "thread": "nvmf_tgt_poll_group_000", 00:20:53.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.082 "listen_address": { 00:20:53.082 "trtype": "TCP", 00:20:53.082 "adrfam": "IPv4", 
00:20:53.082 "traddr": "10.0.0.2", 00:20:53.082 "trsvcid": "4420" 00:20:53.082 }, 00:20:53.082 "peer_address": { 00:20:53.082 "trtype": "TCP", 00:20:53.082 "adrfam": "IPv4", 00:20:53.082 "traddr": "10.0.0.1", 00:20:53.082 "trsvcid": "48558" 00:20:53.082 }, 00:20:53.082 "auth": { 00:20:53.082 "state": "completed", 00:20:53.082 "digest": "sha384", 00:20:53.082 "dhgroup": "null" 00:20:53.082 } 00:20:53.082 } 00:20:53.082 ]' 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.082 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.341 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:53.341 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.908 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.909 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.167 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.427 00:20:54.427 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.427 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.427 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.685 { 00:20:54.685 "cntlid": 51, 00:20:54.685 "qid": 0, 00:20:54.685 "state": "enabled", 
00:20:54.685 "thread": "nvmf_tgt_poll_group_000", 00:20:54.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.685 "listen_address": { 00:20:54.685 "trtype": "TCP", 00:20:54.685 "adrfam": "IPv4", 00:20:54.685 "traddr": "10.0.0.2", 00:20:54.685 "trsvcid": "4420" 00:20:54.685 }, 00:20:54.685 "peer_address": { 00:20:54.685 "trtype": "TCP", 00:20:54.685 "adrfam": "IPv4", 00:20:54.685 "traddr": "10.0.0.1", 00:20:54.685 "trsvcid": "48574" 00:20:54.685 }, 00:20:54.685 "auth": { 00:20:54.685 "state": "completed", 00:20:54.685 "digest": "sha384", 00:20:54.685 "dhgroup": "null" 00:20:54.685 } 00:20:54.685 } 00:20:54.685 ]' 00:20:54.685 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.686 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.944 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:54.944 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:55.512 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.771 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.029 00:20:56.029 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.029 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.029 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.288 22:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.288 { 00:20:56.288 "cntlid": 53, 00:20:56.288 "qid": 0, 00:20:56.288 "state": "enabled", 00:20:56.288 "thread": "nvmf_tgt_poll_group_000", 00:20:56.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.288 "listen_address": { 00:20:56.288 "trtype": "TCP", 00:20:56.288 "adrfam": "IPv4", 00:20:56.288 "traddr": "10.0.0.2", 00:20:56.288 "trsvcid": "4420" 00:20:56.288 }, 00:20:56.288 "peer_address": { 00:20:56.288 "trtype": "TCP", 00:20:56.288 "adrfam": "IPv4", 00:20:56.288 "traddr": "10.0.0.1", 00:20:56.288 "trsvcid": "48602" 00:20:56.288 }, 00:20:56.288 "auth": { 00:20:56.288 "state": "completed", 00:20:56.288 "digest": "sha384", 00:20:56.288 "dhgroup": "null" 00:20:56.288 } 00:20:56.288 } 00:20:56.288 ]' 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.288 22:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.547 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:56.547 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.115 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.374 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.633 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.633 { 00:20:57.633 "cntlid": 55, 00:20:57.633 "qid": 0, 00:20:57.633 "state": "enabled", 00:20:57.633 "thread": "nvmf_tgt_poll_group_000", 00:20:57.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.633 "listen_address": { 00:20:57.633 "trtype": "TCP", 00:20:57.633 "adrfam": "IPv4", 00:20:57.633 "traddr": "10.0.0.2", 00:20:57.633 "trsvcid": "4420" 00:20:57.633 }, 00:20:57.633 "peer_address": { 00:20:57.633 "trtype": "TCP", 00:20:57.633 "adrfam": "IPv4", 00:20:57.633 "traddr": "10.0.0.1", 00:20:57.633 "trsvcid": "48628" 00:20:57.633 }, 00:20:57.633 "auth": { 00:20:57.633 "state": "completed", 00:20:57.633 "digest": "sha384", 00:20:57.633 "dhgroup": "null" 00:20:57.633 } 00:20:57.633 } 00:20:57.633 ]' 00:20:57.633 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.892 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.151 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:58.151 22:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.719 22:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.719 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.978 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.978 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.978 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.978 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.978 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.237 { 00:20:59.237 "cntlid": 57, 00:20:59.237 "qid": 0, 00:20:59.237 "state": "enabled", 00:20:59.237 "thread": "nvmf_tgt_poll_group_000", 00:20:59.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.237 "listen_address": { 00:20:59.237 "trtype": "TCP", 00:20:59.237 "adrfam": "IPv4", 00:20:59.237 "traddr": "10.0.0.2", 00:20:59.237 "trsvcid": "4420" 00:20:59.237 }, 00:20:59.237 "peer_address": { 00:20:59.237 "trtype": "TCP", 00:20:59.237 "adrfam": "IPv4", 00:20:59.237 "traddr": "10.0.0.1", 00:20:59.237 "trsvcid": "48666" 00:20:59.237 }, 00:20:59.237 "auth": { 00:20:59.237 "state": "completed", 00:20:59.237 "digest": "sha384", 00:20:59.237 "dhgroup": "ffdhe2048" 00:20:59.237 } 00:20:59.237 } 00:20:59.237 ]' 00:20:59.237 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.496 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.496 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.496 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.496 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.496 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.496 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.496 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.755 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:20:59.755 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.323 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.323 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.581 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.581 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.581 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.839 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.840 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.840 { 00:21:00.840 "cntlid": 59, 00:21:00.840 "qid": 0, 00:21:00.840 "state": "enabled", 00:21:00.840 "thread": "nvmf_tgt_poll_group_000", 00:21:00.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.840 "listen_address": { 00:21:00.840 "trtype": "TCP", 00:21:00.840 "adrfam": "IPv4", 00:21:00.840 "traddr": "10.0.0.2", 00:21:00.840 "trsvcid": "4420" 00:21:00.840 }, 00:21:00.840 "peer_address": { 00:21:00.840 "trtype": "TCP", 00:21:00.840 "adrfam": "IPv4", 00:21:00.840 "traddr": "10.0.0.1", 00:21:00.840 "trsvcid": "48694" 00:21:00.840 }, 00:21:00.840 "auth": { 00:21:00.840 "state": "completed", 00:21:00.840 "digest": "sha384", 00:21:00.840 "dhgroup": "ffdhe2048" 00:21:00.840 } 00:21:00.840 } 00:21:00.840 ]' 00:21:00.840 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.840 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.840 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.098 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.098 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.098 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.098 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.098 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.356 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:01.356 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.924 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.182 00:21:02.183 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.441 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.442 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.442 { 00:21:02.442 "cntlid": 61, 00:21:02.442 "qid": 0, 00:21:02.442 "state": "enabled", 00:21:02.442 "thread": "nvmf_tgt_poll_group_000", 00:21:02.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.442 "listen_address": { 00:21:02.442 "trtype": "TCP", 00:21:02.442 "adrfam": "IPv4", 00:21:02.442 "traddr": "10.0.0.2", 00:21:02.442 "trsvcid": "4420" 00:21:02.442 }, 00:21:02.442 "peer_address": { 00:21:02.442 "trtype": "TCP", 00:21:02.442 "adrfam": "IPv4", 00:21:02.442 "traddr": "10.0.0.1", 00:21:02.442 "trsvcid": "48730" 00:21:02.442 }, 00:21:02.442 "auth": { 00:21:02.442 "state": "completed", 00:21:02.442 "digest": "sha384", 00:21:02.442 "dhgroup": "ffdhe2048" 00:21:02.442 } 00:21:02.442 } 00:21:02.442 ]' 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.442 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.701 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.701 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.701 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.701 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.701 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.959 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:02.959 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:03.527 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.786 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.786 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.786 00:21:03.786 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.786 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.786 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.045 { 00:21:04.045 "cntlid": 63, 00:21:04.045 "qid": 0, 00:21:04.045 "state": "enabled", 00:21:04.045 "thread": "nvmf_tgt_poll_group_000", 00:21:04.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.045 "listen_address": { 00:21:04.045 "trtype": "TCP", 00:21:04.045 "adrfam": "IPv4", 00:21:04.045 "traddr": "10.0.0.2", 00:21:04.045 "trsvcid": "4420" 00:21:04.045 }, 00:21:04.045 "peer_address": { 00:21:04.045 "trtype": "TCP", 00:21:04.045 "adrfam": "IPv4", 00:21:04.045 "traddr": "10.0.0.1", 00:21:04.045 "trsvcid": "60814" 00:21:04.045 }, 00:21:04.045 "auth": { 00:21:04.045 "state": "completed", 00:21:04.045 "digest": "sha384", 00:21:04.045 "dhgroup": "ffdhe2048" 00:21:04.045 } 00:21:04.045 } 00:21:04.045 ]' 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.045 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.304 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.304 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.304 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.304 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.304 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.563 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:04.563 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:05.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.130 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.131 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.389 
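The records above repeat one verification cycle per (digest, dhgroup, keyid) combination. Reconstructed from the xtrace tags (target/auth.sh@119-@121 for the loop, @65-@83 for the body), the driver looks roughly like the sketch below; this is a minimal reading of the trace, not the verbatim script, and the $digest, $subnqn and $hostnqn names are assumptions:

    # Outer loop, per the @119-@121 trace lines: one bdev_nvme_set_options call
    # per dhgroup/keyid pair, then a full connect/verify/disconnect cycle.
    for dhgroup in "${dhgroups[@]}"; do          # null ffdhe2048 ffdhe3072 ...
      for keyid in "${!keys[@]}"; do             # 0 1 2 3
        hostrpc bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done

    connect_authenticate() {                     # sketch of @65-@83
      local digest=$1 dhgroup=$2 key=key$3
      # ckey expands to nothing when no controller key exists for index $3,
      # which is why the key3 cycles above carry no --dhchap-ctrlr-key.
      local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "$key" "${ckey[@]}"
      bdev_connect -b nvme0 --dhchap-key "$key" "${ckey[@]}"  # host-side attach
      # ... qpair auth checks, detach, kernel nvme connect/disconnect,
      # nvmf_subsystem_remove_host (as seen in the surrounding records)
    }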
00:21:05.389 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.389 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.389 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.647 { 00:21:05.647 "cntlid": 65, 00:21:05.647 "qid": 0, 00:21:05.647 "state": "enabled", 00:21:05.647 "thread": "nvmf_tgt_poll_group_000", 00:21:05.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.647 "listen_address": { 00:21:05.647 "trtype": "TCP", 00:21:05.647 "adrfam": "IPv4", 00:21:05.647 "traddr": "10.0.0.2", 00:21:05.647 "trsvcid": "4420" 00:21:05.647 }, 00:21:05.647 "peer_address": { 00:21:05.647 "trtype": "TCP", 00:21:05.647 "adrfam": "IPv4", 00:21:05.647 "traddr": "10.0.0.1", 00:21:05.647 "trsvcid": "60862" 00:21:05.647 }, 00:21:05.647 "auth": { 00:21:05.647 "state": "completed", 00:21:05.647 "digest": "sha384", 00:21:05.647 "dhgroup": "ffdhe3072" 00:21:05.647 } 00:21:05.647 } 00:21:05.647 ]' 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.647 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:05.905 22:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.472 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.731 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.989 00:21:06.989 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.989 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.989 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.248 { 00:21:07.248 "cntlid": 67, 00:21:07.248 "qid": 0, 00:21:07.248 "state": "enabled", 00:21:07.248 "thread": "nvmf_tgt_poll_group_000", 00:21:07.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.248 "listen_address": { 00:21:07.248 "trtype": "TCP", 00:21:07.248 "adrfam": "IPv4", 00:21:07.248 "traddr": "10.0.0.2", 00:21:07.248 "trsvcid": "4420" 00:21:07.248 }, 00:21:07.248 "peer_address": { 00:21:07.248 "trtype": "TCP", 00:21:07.248 "adrfam": "IPv4", 00:21:07.248 "traddr": "10.0.0.1", 00:21:07.248 "trsvcid": "60886" 00:21:07.248 }, 00:21:07.248 "auth": { 00:21:07.248 "state": "completed", 00:21:07.248 "digest": "sha384", 00:21:07.248 "dhgroup": "ffdhe3072" 00:21:07.248 } 00:21:07.248 } 00:21:07.248 ]' 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.248 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.507 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.507 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.507 22:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.507 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret 
DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:07.507 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.074 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.333 22:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.592 00:21:08.592 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.592 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.592 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.851 { 00:21:08.851 "cntlid": 69, 00:21:08.851 "qid": 0, 00:21:08.851 "state": "enabled", 00:21:08.851 "thread": "nvmf_tgt_poll_group_000", 00:21:08.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.851 "listen_address": { 00:21:08.851 "trtype": "TCP", 00:21:08.851 "adrfam": "IPv4", 00:21:08.851 "traddr": "10.0.0.2", 00:21:08.851 "trsvcid": "4420" 00:21:08.851 }, 00:21:08.851 "peer_address": { 00:21:08.851 "trtype": "TCP", 00:21:08.851 "adrfam": "IPv4", 00:21:08.851 "traddr": "10.0.0.1", 00:21:08.851 "trsvcid": "60912" 00:21:08.851 }, 00:21:08.851 "auth": { 00:21:08.851 "state": "completed", 00:21:08.851 "digest": "sha384", 00:21:08.851 "dhgroup": "ffdhe3072" 00:21:08.851 } 00:21:08.851 } 00:21:08.851 ]' 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.851 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:09.109 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:09.109 22:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:09.676 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.676 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.676 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.677 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.677 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.677 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.677 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.677 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
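After each attach the test asserts on the first qpair's auth block from nvmf_subsystem_get_qpairs, using the jq filters visible at @75-@77. A condensed form of those checks follows; the $qpairs, $digest and $dhgroup variables are illustrative (the traced script compares against pattern-escaped literals such as \s\h\a\3\8\4):

    # Pull the qpair list for the subsystem and verify the negotiated auth.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]  # e.g. sha384
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. ffdhe3072
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]  # handshake done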
00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.935 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.193 00:21:10.193 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.193 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.193 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.452 22:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.452 { 00:21:10.452 "cntlid": 71, 00:21:10.452 "qid": 0, 00:21:10.452 "state": "enabled", 00:21:10.452 "thread": "nvmf_tgt_poll_group_000", 00:21:10.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.452 "listen_address": { 00:21:10.452 "trtype": "TCP", 00:21:10.452 "adrfam": "IPv4", 00:21:10.452 "traddr": "10.0.0.2", 00:21:10.452 "trsvcid": "4420" 00:21:10.452 }, 00:21:10.452 "peer_address": { 00:21:10.452 "trtype": "TCP", 00:21:10.452 "adrfam": "IPv4", 00:21:10.452 "traddr": "10.0.0.1", 00:21:10.452 "trsvcid": "60938" 00:21:10.452 }, 00:21:10.452 "auth": { 00:21:10.452 "state": "completed", 00:21:10.452 "digest": "sha384", 00:21:10.452 "dhgroup": "ffdhe3072" 00:21:10.452 } 00:21:10.452 } 00:21:10.452 ]' 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.452 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.710 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:10.710 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.276 22:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
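
[Annotation] Each block of trace above and below is one pass of the same DH-HMAC-CHAP cycle, driven by target/auth.sh for every sha384 dhgroup (ffdhe3072 through ffdhe8192) and key id 0-3: restrict the host driver's accepted digests/dhgroups, register the host's key on the target subsystem, attach through the host-side bdev driver, check the controller name and the qpair's auth state, detach, repeat the handshake with the kernel nvme initiator, and remove the host. A minimal sketch of one pass, reassembled from the commands visible in the trace, follows; TGT_SOCK and the KEY0/CKEY0 secret values are assumptions (the target-side rpc_cmd socket and the raw DHHC-1 secrets are not usable from this excerpt), everything else is copied from the log.

    #!/usr/bin/env bash
    # Sketch of one connect/verify/disconnect pass from the trace above.
    # NQNs, addresses, and RPC paths are taken from the log; TGT_SOCK and
    # the KEY0/CKEY0 secret values are illustrative placeholders.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    TGT_SOCK=/var/tmp/spdk.sock   # assumed default target RPC socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

    # Limit the host-side driver to the digest/dhgroup pair under test.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Register the host on the target with key 0 (the ctrlr key is only
    # passed when the pass exercises bidirectional authentication).
    "$RPC" -s "$TGT_SOCK" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach via the host-side bdev driver, then verify controller and qpair.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$RPC" -s "$TGT_SOCK" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    # expect: state=completed, digest=sha384, dhgroup=ffdhe4096
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"   # DHHC-1:... secrets
    nvme disconnect -n "$SUBNQN"
    "$RPC" -s "$TGT_SOCK" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The jq assertions on .[0].auth mirror the [[ sha384 == \s\h\a\3\8\4 ]]-style checks in the trace; a pass is counted only when the qpair reports the exact digest, dhgroup, and completed state that were configured. [End annotation]
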
00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.534 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.793 00:21:11.793 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.793 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.793 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.052 { 00:21:12.052 "cntlid": 73, 00:21:12.052 "qid": 0, 00:21:12.052 "state": "enabled", 00:21:12.052 "thread": "nvmf_tgt_poll_group_000", 00:21:12.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.052 "listen_address": { 00:21:12.052 "trtype": "TCP", 00:21:12.052 "adrfam": "IPv4", 00:21:12.052 "traddr": "10.0.0.2", 00:21:12.052 "trsvcid": "4420" 00:21:12.052 }, 00:21:12.052 "peer_address": { 00:21:12.052 "trtype": "TCP", 00:21:12.052 "adrfam": "IPv4", 00:21:12.052 "traddr": "10.0.0.1", 00:21:12.052 "trsvcid": "60954" 00:21:12.052 }, 00:21:12.052 "auth": { 00:21:12.052 "state": "completed", 00:21:12.052 "digest": "sha384", 00:21:12.052 "dhgroup": "ffdhe4096" 00:21:12.052 } 00:21:12.052 } 00:21:12.052 ]' 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.052 
22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.052 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.311 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:12.311 22:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.877 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.135 22:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.394 00:21:13.394 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.394 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.394 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.652 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.652 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.653 { 00:21:13.653 "cntlid": 75, 00:21:13.653 "qid": 0, 00:21:13.653 "state": "enabled", 00:21:13.653 "thread": "nvmf_tgt_poll_group_000", 00:21:13.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.653 "listen_address": { 00:21:13.653 "trtype": "TCP", 00:21:13.653 "adrfam": "IPv4", 00:21:13.653 "traddr": "10.0.0.2", 00:21:13.653 "trsvcid": "4420" 00:21:13.653 }, 00:21:13.653 "peer_address": { 00:21:13.653 "trtype": "TCP", 00:21:13.653 "adrfam": "IPv4", 00:21:13.653 "traddr": "10.0.0.1", 00:21:13.653 "trsvcid": "34150" 00:21:13.653 }, 00:21:13.653 "auth": { 00:21:13.653 "state": "completed", 00:21:13.653 "digest": "sha384", 00:21:13.653 "dhgroup": "ffdhe4096" 00:21:13.653 } 00:21:13.653 } 00:21:13.653 ]' 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:13.653 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.911 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.911 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.911 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.911 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:13.911 22:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.479 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.738 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.739 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.739 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.997 00:21:14.997 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.997 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.997 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.256 { 00:21:15.256 "cntlid": 77, 00:21:15.256 "qid": 0, 00:21:15.256 "state": "enabled", 00:21:15.256 "thread": "nvmf_tgt_poll_group_000", 00:21:15.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.256 "listen_address": { 00:21:15.256 "trtype": "TCP", 00:21:15.256 "adrfam": "IPv4", 00:21:15.256 "traddr": "10.0.0.2", 00:21:15.256 "trsvcid": "4420" 00:21:15.256 }, 00:21:15.256 "peer_address": { 00:21:15.256 "trtype": "TCP", 00:21:15.256 "adrfam": "IPv4", 00:21:15.256 "traddr": "10.0.0.1", 00:21:15.256 "trsvcid": "34180" 00:21:15.256 }, 00:21:15.256 "auth": { 00:21:15.256 "state": "completed", 00:21:15.256 "digest": "sha384", 00:21:15.256 "dhgroup": "ffdhe4096" 00:21:15.256 } 00:21:15.256 } 00:21:15.256 ]' 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.256 22:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.256 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.515 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.515 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.515 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.515 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.515 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:15.515 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.082 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.341 22:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.598 00:21:16.598 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.598 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.598 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.857 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.857 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.858 { 00:21:16.858 "cntlid": 79, 00:21:16.858 "qid": 0, 00:21:16.858 "state": "enabled", 00:21:16.858 "thread": "nvmf_tgt_poll_group_000", 00:21:16.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.858 "listen_address": { 00:21:16.858 "trtype": "TCP", 00:21:16.858 "adrfam": "IPv4", 00:21:16.858 "traddr": "10.0.0.2", 00:21:16.858 "trsvcid": "4420" 00:21:16.858 }, 00:21:16.858 "peer_address": { 00:21:16.858 "trtype": "TCP", 00:21:16.858 "adrfam": "IPv4", 00:21:16.858 "traddr": "10.0.0.1", 00:21:16.858 "trsvcid": "34210" 00:21:16.858 }, 00:21:16.858 "auth": { 00:21:16.858 "state": "completed", 00:21:16.858 "digest": "sha384", 00:21:16.858 "dhgroup": "ffdhe4096" 00:21:16.858 } 00:21:16.858 } 00:21:16.858 ]' 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.858 22:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.858 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:17.117 22:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.684 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.943 22:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.943 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.511 00:21:18.511 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.511 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.511 22:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.511 { 00:21:18.511 "cntlid": 81, 00:21:18.511 "qid": 0, 00:21:18.511 "state": "enabled", 00:21:18.511 "thread": "nvmf_tgt_poll_group_000", 00:21:18.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.511 "listen_address": { 00:21:18.511 "trtype": "TCP", 00:21:18.511 "adrfam": "IPv4", 00:21:18.511 "traddr": "10.0.0.2", 00:21:18.511 "trsvcid": "4420" 00:21:18.511 }, 00:21:18.511 "peer_address": { 00:21:18.511 "trtype": "TCP", 00:21:18.511 "adrfam": "IPv4", 00:21:18.511 "traddr": "10.0.0.1", 00:21:18.511 "trsvcid": "34244" 00:21:18.511 }, 00:21:18.511 "auth": { 00:21:18.511 "state": "completed", 00:21:18.511 "digest": 
"sha384", 00:21:18.511 "dhgroup": "ffdhe6144" 00:21:18.511 } 00:21:18.511 } 00:21:18.511 ]' 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.511 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.770 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.770 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.770 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.770 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:18.770 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:19.337 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.337 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.337 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.337 22:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.337 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.337 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.337 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.337 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.597 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.855 00:21:19.855 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.855 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.855 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.115 { 00:21:20.115 "cntlid": 83, 00:21:20.115 "qid": 0, 00:21:20.115 "state": "enabled", 00:21:20.115 "thread": "nvmf_tgt_poll_group_000", 00:21:20.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.115 "listen_address": { 00:21:20.115 "trtype": "TCP", 00:21:20.115 "adrfam": "IPv4", 00:21:20.115 "traddr": "10.0.0.2", 00:21:20.115 
"trsvcid": "4420" 00:21:20.115 }, 00:21:20.115 "peer_address": { 00:21:20.115 "trtype": "TCP", 00:21:20.115 "adrfam": "IPv4", 00:21:20.115 "traddr": "10.0.0.1", 00:21:20.115 "trsvcid": "34274" 00:21:20.115 }, 00:21:20.115 "auth": { 00:21:20.115 "state": "completed", 00:21:20.115 "digest": "sha384", 00:21:20.115 "dhgroup": "ffdhe6144" 00:21:20.115 } 00:21:20.115 } 00:21:20.115 ]' 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.115 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.373 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.373 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.373 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.373 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.373 22:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.632 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:20.632 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.200 
22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.200 22:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.769 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.769 { 00:21:21.769 "cntlid": 85, 00:21:21.769 "qid": 0, 00:21:21.769 "state": "enabled", 00:21:21.769 "thread": "nvmf_tgt_poll_group_000", 00:21:21.769 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.769 "listen_address": { 00:21:21.769 "trtype": "TCP", 00:21:21.769 "adrfam": "IPv4", 00:21:21.769 "traddr": "10.0.0.2", 00:21:21.769 "trsvcid": "4420" 00:21:21.769 }, 00:21:21.769 "peer_address": { 00:21:21.769 "trtype": "TCP", 00:21:21.769 "adrfam": "IPv4", 00:21:21.769 "traddr": "10.0.0.1", 00:21:21.769 "trsvcid": "34294" 00:21:21.769 }, 00:21:21.769 "auth": { 00:21:21.769 "state": "completed", 00:21:21.769 "digest": "sha384", 00:21:21.769 "dhgroup": "ffdhe6144" 00:21:21.769 } 00:21:21.769 } 00:21:21.769 ]' 00:21:21.769 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.028 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:22.287 22:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:22.854 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.854 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.854 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.854 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.855 22:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.855 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.113 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.113 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.372 00:21:23.372 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.372 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.372 22:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.631 { 00:21:23.631 "cntlid": 87, 
00:21:23.631 "qid": 0, 00:21:23.631 "state": "enabled", 00:21:23.631 "thread": "nvmf_tgt_poll_group_000", 00:21:23.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.631 "listen_address": { 00:21:23.631 "trtype": "TCP", 00:21:23.631 "adrfam": "IPv4", 00:21:23.631 "traddr": "10.0.0.2", 00:21:23.631 "trsvcid": "4420" 00:21:23.631 }, 00:21:23.631 "peer_address": { 00:21:23.631 "trtype": "TCP", 00:21:23.631 "adrfam": "IPv4", 00:21:23.631 "traddr": "10.0.0.1", 00:21:23.631 "trsvcid": "48298" 00:21:23.631 }, 00:21:23.631 "auth": { 00:21:23.631 "state": "completed", 00:21:23.631 "digest": "sha384", 00:21:23.631 "dhgroup": "ffdhe6144" 00:21:23.631 } 00:21:23.631 } 00:21:23.631 ]' 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.631 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.632 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.632 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.632 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.891 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:23.891 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.458 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.459 22:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.717 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.976 00:21:25.234 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.234 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.235 { 00:21:25.235 "cntlid": 89, 00:21:25.235 "qid": 0, 00:21:25.235 "state": "enabled", 00:21:25.235 "thread": "nvmf_tgt_poll_group_000", 00:21:25.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.235 "listen_address": { 00:21:25.235 "trtype": "TCP", 00:21:25.235 "adrfam": "IPv4", 00:21:25.235 "traddr": "10.0.0.2", 00:21:25.235 "trsvcid": "4420" 00:21:25.235 }, 00:21:25.235 "peer_address": { 00:21:25.235 "trtype": "TCP", 00:21:25.235 "adrfam": "IPv4", 00:21:25.235 "traddr": "10.0.0.1", 00:21:25.235 "trsvcid": "48324" 00:21:25.235 }, 00:21:25.235 "auth": { 00:21:25.235 "state": "completed", 00:21:25.235 "digest": "sha384", 00:21:25.235 "dhgroup": "ffdhe8192" 00:21:25.235 } 00:21:25.235 } 00:21:25.235 ]' 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.235 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.493 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.494 22:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.494 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.494 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.494 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.751 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:25.751 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:26.318 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.318 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.318 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.318 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.318 22:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.318 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.319 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.885 00:21:26.885 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.885 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.885 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.143 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.143 { 00:21:27.144 "cntlid": 91, 00:21:27.144 "qid": 0, 00:21:27.144 "state": "enabled", 00:21:27.144 "thread": "nvmf_tgt_poll_group_000", 00:21:27.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.144 "listen_address": { 00:21:27.144 "trtype": "TCP", 00:21:27.144 "adrfam": "IPv4", 00:21:27.144 "traddr": "10.0.0.2", 00:21:27.144 "trsvcid": "4420" 00:21:27.144 }, 00:21:27.144 "peer_address": { 00:21:27.144 "trtype": "TCP", 00:21:27.144 "adrfam": "IPv4", 00:21:27.144 "traddr": "10.0.0.1", 00:21:27.144 "trsvcid": "48350" 00:21:27.144 }, 00:21:27.144 "auth": { 00:21:27.144 "state": "completed", 00:21:27.144 "digest": "sha384", 00:21:27.144 "dhgroup": "ffdhe8192" 00:21:27.144 } 00:21:27.144 } 00:21:27.144 ]' 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.144 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.401 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:27.401 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.967 22:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.967 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.226 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.795 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.795 22:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.795 { 00:21:28.795 "cntlid": 93, 00:21:28.795 "qid": 0, 00:21:28.795 "state": "enabled", 00:21:28.795 "thread": "nvmf_tgt_poll_group_000", 00:21:28.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.795 "listen_address": { 00:21:28.795 "trtype": "TCP", 00:21:28.795 "adrfam": "IPv4", 00:21:28.795 "traddr": "10.0.0.2", 00:21:28.795 "trsvcid": "4420" 00:21:28.795 }, 00:21:28.795 "peer_address": { 00:21:28.795 "trtype": "TCP", 00:21:28.795 "adrfam": "IPv4", 00:21:28.795 "traddr": "10.0.0.1", 00:21:28.795 "trsvcid": "48384" 00:21:28.795 }, 00:21:28.795 "auth": { 00:21:28.795 "state": "completed", 00:21:28.795 "digest": "sha384", 00:21:28.795 "dhgroup": "ffdhe8192" 00:21:28.795 } 00:21:28.795 } 00:21:28.795 ]' 00:21:28.795 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.054 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.313 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:29.313 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:29.880 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.880 22:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.880 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.880 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.880 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.880 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.881 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.139 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.139 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.139 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.139 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.402 00:21:30.402 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.402 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.402 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.660 { 00:21:30.660 "cntlid": 95, 00:21:30.660 "qid": 0, 00:21:30.660 "state": "enabled", 00:21:30.660 "thread": "nvmf_tgt_poll_group_000", 00:21:30.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.660 "listen_address": { 00:21:30.660 "trtype": "TCP", 00:21:30.660 "adrfam": "IPv4", 00:21:30.660 "traddr": "10.0.0.2", 00:21:30.660 "trsvcid": "4420" 00:21:30.660 }, 00:21:30.660 "peer_address": { 00:21:30.660 "trtype": "TCP", 00:21:30.660 "adrfam": "IPv4", 00:21:30.660 "traddr": "10.0.0.1", 00:21:30.660 "trsvcid": "48408" 00:21:30.660 }, 00:21:30.660 "auth": { 00:21:30.660 "state": "completed", 00:21:30.660 "digest": "sha384", 00:21:30.660 "dhgroup": "ffdhe8192" 00:21:30.660 } 00:21:30.660 } 00:21:30.660 ]' 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.660 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.919 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.919 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.919 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.919 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.919 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.178 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:31.178 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.745 22:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.745 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.004 00:21:32.004 
22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.004 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.004 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.264 { 00:21:32.264 "cntlid": 97, 00:21:32.264 "qid": 0, 00:21:32.264 "state": "enabled", 00:21:32.264 "thread": "nvmf_tgt_poll_group_000", 00:21:32.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.264 "listen_address": { 00:21:32.264 "trtype": "TCP", 00:21:32.264 "adrfam": "IPv4", 00:21:32.264 "traddr": "10.0.0.2", 00:21:32.264 "trsvcid": "4420" 00:21:32.264 }, 00:21:32.264 "peer_address": { 00:21:32.264 "trtype": "TCP", 00:21:32.264 "adrfam": "IPv4", 00:21:32.264 "traddr": "10.0.0.1", 00:21:32.264 "trsvcid": "48436" 00:21:32.264 }, 00:21:32.264 "auth": { 00:21:32.264 "state": "completed", 00:21:32.264 "digest": "sha512", 00:21:32.264 "dhgroup": "null" 00:21:32.264 } 00:21:32.264 } 00:21:32.264 ]' 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.264 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.523 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:32.523 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.523 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.523 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.523 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.523 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:32.523 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.091 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.350 22:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.609 00:21:33.609 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.609 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.609 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.867 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.867 { 00:21:33.867 "cntlid": 99, 00:21:33.868 "qid": 0, 00:21:33.868 "state": "enabled", 00:21:33.868 "thread": "nvmf_tgt_poll_group_000", 00:21:33.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.868 "listen_address": { 00:21:33.868 "trtype": "TCP", 00:21:33.868 "adrfam": "IPv4", 00:21:33.868 "traddr": "10.0.0.2", 00:21:33.868 "trsvcid": "4420" 00:21:33.868 }, 00:21:33.868 "peer_address": { 00:21:33.868 "trtype": "TCP", 00:21:33.868 "adrfam": "IPv4", 00:21:33.868 "traddr": "10.0.0.1", 00:21:33.868 "trsvcid": "46150" 00:21:33.868 }, 00:21:33.868 "auth": { 00:21:33.868 "state": "completed", 00:21:33.868 "digest": "sha512", 00:21:33.868 "dhgroup": "null" 00:21:33.868 } 00:21:33.868 } 00:21:33.868 ]' 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.868 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.126 22:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:34.126 22:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.694 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:34.953 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.212 00:21:35.212 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.212 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.212 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.472 { 00:21:35.472 "cntlid": 101, 00:21:35.472 "qid": 0, 00:21:35.472 "state": "enabled", 00:21:35.472 "thread": "nvmf_tgt_poll_group_000", 00:21:35.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.472 "listen_address": { 00:21:35.472 "trtype": "TCP", 00:21:35.472 "adrfam": "IPv4", 00:21:35.472 "traddr": "10.0.0.2", 00:21:35.472 "trsvcid": "4420" 00:21:35.472 }, 00:21:35.472 "peer_address": { 00:21:35.472 "trtype": "TCP", 00:21:35.472 "adrfam": "IPv4", 00:21:35.472 "traddr": "10.0.0.1", 00:21:35.472 "trsvcid": "46164" 00:21:35.472 }, 00:21:35.472 "auth": { 00:21:35.472 "state": "completed", 00:21:35.472 "digest": "sha512", 00:21:35.472 "dhgroup": "null" 00:21:35.472 } 00:21:35.472 } 00:21:35.472 ]' 00:21:35.472 22:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.472 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.729 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:35.729 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.294 22:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.552 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.811 00:21:36.811 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.811 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.811 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.070 { 00:21:37.070 "cntlid": 103, 00:21:37.070 "qid": 0, 00:21:37.070 "state": "enabled", 00:21:37.070 "thread": "nvmf_tgt_poll_group_000", 00:21:37.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.070 "listen_address": { 00:21:37.070 "trtype": "TCP", 00:21:37.070 "adrfam": "IPv4", 00:21:37.070 "traddr": "10.0.0.2", 00:21:37.070 "trsvcid": "4420" 00:21:37.070 }, 00:21:37.070 "peer_address": { 00:21:37.070 "trtype": "TCP", 00:21:37.070 "adrfam": "IPv4", 00:21:37.070 "traddr": "10.0.0.1", 00:21:37.070 "trsvcid": "46198" 00:21:37.070 }, 00:21:37.070 "auth": { 00:21:37.070 "state": "completed", 00:21:37.070 "digest": "sha512", 00:21:37.070 "dhgroup": "null" 00:21:37.070 } 00:21:37.070 } 00:21:37.070 ]' 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.070 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.328 22:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:37.328 22:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.896 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
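The trace above completes one iteration of the auth matrix: reconfigure the host-side DH-HMAC-CHAP parameters over the host RPC socket, allow the host NQN on the target subsystem with the key under test, then attach a controller so authentication runs during the fabric CONNECT. A condensed sketch of that loop body follows; the NQNs, addresses, and flags are copied from the trace, and the shell variables are introduced here only for readability:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    # Host side (hostrpc in the trace): accept only the digest/dhgroup under test.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side (rpc_cmd in the trace): register the host with the key pair being exercised.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach a controller; DH-HMAC-CHAP runs as part of CONNECT.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0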
00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.155 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.413 00:21:38.413 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.413 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.413 22:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.413 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.413 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.413 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.413 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.671 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.671 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.671 { 00:21:38.671 "cntlid": 105, 00:21:38.671 "qid": 0, 00:21:38.672 "state": "enabled", 00:21:38.672 "thread": "nvmf_tgt_poll_group_000", 00:21:38.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.672 "listen_address": { 00:21:38.672 "trtype": "TCP", 00:21:38.672 "adrfam": "IPv4", 00:21:38.672 "traddr": "10.0.0.2", 00:21:38.672 "trsvcid": "4420" 00:21:38.672 }, 00:21:38.672 "peer_address": { 00:21:38.672 "trtype": "TCP", 00:21:38.672 "adrfam": "IPv4", 00:21:38.672 "traddr": "10.0.0.1", 00:21:38.672 "trsvcid": "46224" 00:21:38.672 }, 00:21:38.672 "auth": { 00:21:38.672 "state": "completed", 00:21:38.672 "digest": "sha512", 00:21:38.672 "dhgroup": "ffdhe2048" 00:21:38.672 } 00:21:38.672 } 00:21:38.672 ]' 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.672 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.672 22:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.931 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:38.931 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:39.498 22:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.498 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.498 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.498 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.499 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.499 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.499 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.499 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.763 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.763 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.022 { 00:21:40.022 "cntlid": 107, 00:21:40.022 "qid": 0, 00:21:40.022 "state": "enabled", 00:21:40.022 "thread": "nvmf_tgt_poll_group_000", 00:21:40.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.022 "listen_address": { 00:21:40.022 "trtype": "TCP", 00:21:40.022 "adrfam": "IPv4", 00:21:40.022 "traddr": "10.0.0.2", 00:21:40.022 "trsvcid": "4420" 00:21:40.022 }, 00:21:40.022 "peer_address": { 00:21:40.022 "trtype": "TCP", 00:21:40.022 "adrfam": "IPv4", 00:21:40.022 "traddr": "10.0.0.1", 00:21:40.022 "trsvcid": "46240" 00:21:40.022 }, 00:21:40.022 "auth": { 00:21:40.022 "state": "completed", 00:21:40.022 "digest": "sha512", 00:21:40.022 "dhgroup": "ffdhe2048" 00:21:40.022 } 00:21:40.022 } 00:21:40.022 ]' 00:21:40.022 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.282 22:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.541 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:40.541 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
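Each attach in this log is followed by the same verification pass: confirm the controller registered under the expected name, then pull the subsystem's qpairs and inspect the auth descriptor; the "state": "completed" check is the assertion that DH-HMAC-CHAP actually finished. A sketch of those checks, reusing $RPC and $SUBNQN from the sketch above (the jq filters are copied verbatim from the trace):

    name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect the dhgroup set above, e.g. ffdhe2048
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect "completed"
    # Tear down before the next key/dhgroup combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0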
00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.109 22:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.368 00:21:41.368 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.368 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.368 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.626 { 00:21:41.626 "cntlid": 109, 00:21:41.626 "qid": 0, 00:21:41.626 "state": "enabled", 00:21:41.626 "thread": "nvmf_tgt_poll_group_000", 00:21:41.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.626 "listen_address": { 00:21:41.626 "trtype": "TCP", 00:21:41.626 "adrfam": "IPv4", 00:21:41.626 "traddr": "10.0.0.2", 00:21:41.626 "trsvcid": "4420" 00:21:41.626 }, 00:21:41.626 "peer_address": { 00:21:41.626 "trtype": "TCP", 00:21:41.626 "adrfam": "IPv4", 00:21:41.626 "traddr": "10.0.0.1", 00:21:41.626 "trsvcid": "46280" 00:21:41.626 }, 00:21:41.626 "auth": { 00:21:41.626 "state": "completed", 00:21:41.626 "digest": "sha512", 00:21:41.626 "dhgroup": "ffdhe2048" 00:21:41.626 } 00:21:41.626 } 00:21:41.626 ]' 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.626 22:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.626 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.885 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.885 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.885 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.885 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:41.885 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.453 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.712 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.712 22:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.713 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.972 00:21:42.972 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.972 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.972 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.231 { 00:21:43.231 "cntlid": 111, 00:21:43.231 "qid": 0, 00:21:43.231 "state": "enabled", 00:21:43.231 "thread": "nvmf_tgt_poll_group_000", 00:21:43.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:43.231 "listen_address": { 00:21:43.231 "trtype": "TCP", 00:21:43.231 "adrfam": "IPv4", 00:21:43.231 "traddr": "10.0.0.2", 00:21:43.231 "trsvcid": "4420" 00:21:43.231 }, 00:21:43.231 "peer_address": { 00:21:43.231 "trtype": "TCP", 00:21:43.231 "adrfam": "IPv4", 00:21:43.231 "traddr": "10.0.0.1", 00:21:43.231 "trsvcid": "35602" 00:21:43.231 }, 00:21:43.231 "auth": { 00:21:43.231 "state": "completed", 00:21:43.231 "digest": "sha512", 00:21:43.231 "dhgroup": "ffdhe2048" 00:21:43.231 } 00:21:43.231 } 00:21:43.231 ]' 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.231 
22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.231 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.490 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.490 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.490 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.490 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:43.490 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.057 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.058 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.316 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.317 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.317 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.317 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.317 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.317 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.575 00:21:44.575 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.575 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.575 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.834 { 00:21:44.834 "cntlid": 113, 00:21:44.834 "qid": 0, 00:21:44.834 "state": "enabled", 00:21:44.834 "thread": "nvmf_tgt_poll_group_000", 00:21:44.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.834 "listen_address": { 00:21:44.834 "trtype": "TCP", 00:21:44.834 "adrfam": "IPv4", 00:21:44.834 "traddr": "10.0.0.2", 00:21:44.834 "trsvcid": "4420" 00:21:44.834 }, 00:21:44.834 "peer_address": { 00:21:44.834 "trtype": "TCP", 00:21:44.834 "adrfam": "IPv4", 00:21:44.834 "traddr": "10.0.0.1", 00:21:44.834 "trsvcid": "35614" 00:21:44.834 }, 00:21:44.834 "auth": { 00:21:44.834 "state": "completed", 00:21:44.834 "digest": "sha512", 00:21:44.834 "dhgroup": "ffdhe3072" 00:21:44.834 } 00:21:44.834 } 00:21:44.834 ]' 00:21:44.834 22:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.834 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.835 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.835 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.093 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.093 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.093 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.093 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:45.093 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.660 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.918 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.176 00:21:46.176 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.176 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.176 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.435 { 00:21:46.435 "cntlid": 115, 00:21:46.435 "qid": 0, 00:21:46.435 "state": "enabled", 00:21:46.435 "thread": "nvmf_tgt_poll_group_000", 00:21:46.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.435 "listen_address": { 00:21:46.435 "trtype": "TCP", 00:21:46.435 "adrfam": "IPv4", 00:21:46.435 "traddr": "10.0.0.2", 00:21:46.435 "trsvcid": "4420" 00:21:46.435 }, 00:21:46.435 "peer_address": { 00:21:46.435 "trtype": "TCP", 00:21:46.435 "adrfam": "IPv4", 
00:21:46.435 "traddr": "10.0.0.1", 00:21:46.435 "trsvcid": "35634" 00:21:46.435 }, 00:21:46.435 "auth": { 00:21:46.435 "state": "completed", 00:21:46.435 "digest": "sha512", 00:21:46.435 "dhgroup": "ffdhe3072" 00:21:46.435 } 00:21:46.435 } 00:21:46.435 ]' 00:21:46.435 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.435 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.694 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:46.694 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.262 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.521 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.779 00:21:47.779 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.779 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.779 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.038 { 00:21:48.038 "cntlid": 117, 00:21:48.038 "qid": 0, 00:21:48.038 "state": "enabled", 00:21:48.038 "thread": "nvmf_tgt_poll_group_000", 00:21:48.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.038 "listen_address": { 00:21:48.038 "trtype": "TCP", 
00:21:48.038 "adrfam": "IPv4", 00:21:48.038 "traddr": "10.0.0.2", 00:21:48.038 "trsvcid": "4420" 00:21:48.038 }, 00:21:48.038 "peer_address": { 00:21:48.038 "trtype": "TCP", 00:21:48.038 "adrfam": "IPv4", 00:21:48.038 "traddr": "10.0.0.1", 00:21:48.038 "trsvcid": "35660" 00:21:48.038 }, 00:21:48.038 "auth": { 00:21:48.038 "state": "completed", 00:21:48.038 "digest": "sha512", 00:21:48.038 "dhgroup": "ffdhe3072" 00:21:48.038 } 00:21:48.038 } 00:21:48.038 ]' 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.038 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.297 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:48.297 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:48.865 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.123 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.124 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.383 00:21:49.383 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.383 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.383 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.642 { 00:21:49.642 "cntlid": 119, 00:21:49.642 "qid": 0, 00:21:49.642 "state": "enabled", 00:21:49.642 "thread": "nvmf_tgt_poll_group_000", 00:21:49.642 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.642 "listen_address": { 00:21:49.642 "trtype": "TCP", 00:21:49.642 "adrfam": "IPv4", 00:21:49.642 "traddr": "10.0.0.2", 00:21:49.642 "trsvcid": "4420" 00:21:49.642 }, 00:21:49.642 "peer_address": { 00:21:49.642 "trtype": "TCP", 00:21:49.642 "adrfam": "IPv4", 00:21:49.642 "traddr": "10.0.0.1", 00:21:49.642 "trsvcid": "35674" 00:21:49.642 }, 00:21:49.642 "auth": { 00:21:49.642 "state": "completed", 00:21:49.642 "digest": "sha512", 00:21:49.642 "dhgroup": "ffdhe3072" 00:21:49.642 } 00:21:49.642 } 00:21:49.642 ]' 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.642 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.901 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:49.901 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.468 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.469 22:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.727 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.986 00:21:50.986 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.986 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.986 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.244 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.244 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.244 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.244 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.244 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.244 22:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.245 { 00:21:51.245 "cntlid": 121, 00:21:51.245 "qid": 0, 00:21:51.245 "state": "enabled", 00:21:51.245 "thread": "nvmf_tgt_poll_group_000", 00:21:51.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.245 "listen_address": { 00:21:51.245 "trtype": "TCP", 00:21:51.245 "adrfam": "IPv4", 00:21:51.245 "traddr": "10.0.0.2", 00:21:51.245 "trsvcid": "4420" 00:21:51.245 }, 00:21:51.245 "peer_address": { 00:21:51.245 "trtype": "TCP", 00:21:51.245 "adrfam": "IPv4", 00:21:51.245 "traddr": "10.0.0.1", 00:21:51.245 "trsvcid": "35682" 00:21:51.245 }, 00:21:51.245 "auth": { 00:21:51.245 "state": "completed", 00:21:51.245 "digest": "sha512", 00:21:51.245 "dhgroup": "ffdhe4096" 00:21:51.245 } 00:21:51.245 } 00:21:51.245 ]' 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.245 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.503 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:51.503 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:52.071 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.071 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.072 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.331 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.589 00:21:52.589 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.589 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.589 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.848 { 00:21:52.848 "cntlid": 123, 00:21:52.848 "qid": 0, 00:21:52.848 "state": "enabled", 00:21:52.848 "thread": "nvmf_tgt_poll_group_000", 00:21:52.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.848 "listen_address": { 00:21:52.848 "trtype": "TCP", 00:21:52.848 "adrfam": "IPv4", 00:21:52.848 "traddr": "10.0.0.2", 00:21:52.848 "trsvcid": "4420" 00:21:52.848 }, 00:21:52.848 "peer_address": { 00:21:52.848 "trtype": "TCP", 00:21:52.848 "adrfam": "IPv4", 00:21:52.848 "traddr": "10.0.0.1", 00:21:52.848 "trsvcid": "35714" 00:21:52.848 }, 00:21:52.848 "auth": { 00:21:52.848 "state": "completed", 00:21:52.848 "digest": "sha512", 00:21:52.848 "dhgroup": "ffdhe4096" 00:21:52.848 } 00:21:52.848 } 00:21:52.848 ]' 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.848 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.106 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:53.106 22:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.673 22:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.673 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.932 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.191 00:21:54.191 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.191 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.191 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.449 22:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.449 { 00:21:54.449 "cntlid": 125, 00:21:54.449 "qid": 0, 00:21:54.449 "state": "enabled", 00:21:54.449 "thread": "nvmf_tgt_poll_group_000", 00:21:54.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.449 "listen_address": { 00:21:54.449 "trtype": "TCP", 00:21:54.449 "adrfam": "IPv4", 00:21:54.449 "traddr": "10.0.0.2", 00:21:54.449 "trsvcid": "4420" 00:21:54.449 }, 00:21:54.449 "peer_address": { 00:21:54.449 "trtype": "TCP", 00:21:54.449 "adrfam": "IPv4", 00:21:54.449 "traddr": "10.0.0.1", 00:21:54.449 "trsvcid": "47812" 00:21:54.449 }, 00:21:54.449 "auth": { 00:21:54.449 "state": "completed", 00:21:54.449 "digest": "sha512", 00:21:54.449 "dhgroup": "ffdhe4096" 00:21:54.449 } 00:21:54.449 } 00:21:54.449 ]' 00:21:54.449 22:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.449 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.708 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:54.708 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.275 22:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.533 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.790 00:21:55.790 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.790 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.790 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.048 22:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.048 { 00:21:56.048 "cntlid": 127, 00:21:56.048 "qid": 0, 00:21:56.048 "state": "enabled", 00:21:56.048 "thread": "nvmf_tgt_poll_group_000", 00:21:56.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:56.048 "listen_address": { 00:21:56.048 "trtype": "TCP", 00:21:56.048 "adrfam": "IPv4", 00:21:56.048 "traddr": "10.0.0.2", 00:21:56.048 "trsvcid": "4420" 00:21:56.048 }, 00:21:56.048 "peer_address": { 00:21:56.048 "trtype": "TCP", 00:21:56.048 "adrfam": "IPv4", 00:21:56.048 "traddr": "10.0.0.1", 00:21:56.048 "trsvcid": "47836" 00:21:56.048 }, 00:21:56.048 "auth": { 00:21:56.048 "state": "completed", 00:21:56.048 "digest": "sha512", 00:21:56.048 "dhgroup": "ffdhe4096" 00:21:56.048 } 00:21:56.048 } 00:21:56.048 ]' 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.048 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.307 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:56.307 22:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.878 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.137 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.396 00:21:57.396 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.396 22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.396 
22:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.655 { 00:21:57.655 "cntlid": 129, 00:21:57.655 "qid": 0, 00:21:57.655 "state": "enabled", 00:21:57.655 "thread": "nvmf_tgt_poll_group_000", 00:21:57.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.655 "listen_address": { 00:21:57.655 "trtype": "TCP", 00:21:57.655 "adrfam": "IPv4", 00:21:57.655 "traddr": "10.0.0.2", 00:21:57.655 "trsvcid": "4420" 00:21:57.655 }, 00:21:57.655 "peer_address": { 00:21:57.655 "trtype": "TCP", 00:21:57.655 "adrfam": "IPv4", 00:21:57.655 "traddr": "10.0.0.1", 00:21:57.655 "trsvcid": "47856" 00:21:57.655 }, 00:21:57.655 "auth": { 00:21:57.655 "state": "completed", 00:21:57.655 "digest": "sha512", 00:21:57.655 "dhgroup": "ffdhe6144" 00:21:57.655 } 00:21:57.655 } 00:21:57.655 ]' 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.655 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.914 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:57.914 22:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.482 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.741 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.000 00:21:59.000 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.000 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.000 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.260 { 00:21:59.260 "cntlid": 131, 00:21:59.260 "qid": 0, 00:21:59.260 "state": "enabled", 00:21:59.260 "thread": "nvmf_tgt_poll_group_000", 00:21:59.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:59.260 "listen_address": { 00:21:59.260 "trtype": "TCP", 00:21:59.260 "adrfam": "IPv4", 00:21:59.260 "traddr": "10.0.0.2", 00:21:59.260 "trsvcid": "4420" 00:21:59.260 }, 00:21:59.260 "peer_address": { 00:21:59.260 "trtype": "TCP", 00:21:59.260 "adrfam": "IPv4", 00:21:59.260 "traddr": "10.0.0.1", 00:21:59.260 "trsvcid": "47882" 00:21:59.260 }, 00:21:59.260 "auth": { 00:21:59.260 "state": "completed", 00:21:59.260 "digest": "sha512", 00:21:59.260 "dhgroup": "ffdhe6144" 00:21:59.260 } 00:21:59.260 } 00:21:59.260 ]' 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.260 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.519 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.519 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.519 22:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.519 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:21:59.519 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.086 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.346 22:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.605 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.864 { 00:22:00.864 "cntlid": 133, 00:22:00.864 "qid": 0, 00:22:00.864 "state": "enabled", 00:22:00.864 "thread": "nvmf_tgt_poll_group_000", 00:22:00.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:00.864 "listen_address": { 00:22:00.864 "trtype": "TCP", 00:22:00.864 "adrfam": "IPv4", 00:22:00.864 "traddr": "10.0.0.2", 00:22:00.864 "trsvcid": "4420" 00:22:00.864 }, 00:22:00.864 "peer_address": { 00:22:00.864 "trtype": "TCP", 00:22:00.864 "adrfam": "IPv4", 00:22:00.864 "traddr": "10.0.0.1", 00:22:00.864 "trsvcid": "47920" 00:22:00.864 }, 00:22:00.864 "auth": { 00:22:00.864 "state": "completed", 00:22:00.864 "digest": "sha512", 00:22:00.864 "dhgroup": "ffdhe6144" 00:22:00.864 } 00:22:00.864 } 00:22:00.864 ]' 00:22:00.864 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.123 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.381 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret 
DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:22:01.381 22:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:01.949 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.517 00:22:02.517 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.517 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.517 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.517 { 00:22:02.517 "cntlid": 135, 00:22:02.517 "qid": 0, 00:22:02.517 "state": "enabled", 00:22:02.517 "thread": "nvmf_tgt_poll_group_000", 00:22:02.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:02.517 "listen_address": { 00:22:02.517 "trtype": "TCP", 00:22:02.517 "adrfam": "IPv4", 00:22:02.517 "traddr": "10.0.0.2", 00:22:02.517 "trsvcid": "4420" 00:22:02.517 }, 00:22:02.517 "peer_address": { 00:22:02.517 "trtype": "TCP", 00:22:02.517 "adrfam": "IPv4", 00:22:02.517 "traddr": "10.0.0.1", 00:22:02.517 "trsvcid": "47944" 00:22:02.517 }, 00:22:02.517 "auth": { 00:22:02.517 "state": "completed", 00:22:02.517 "digest": "sha512", 00:22:02.517 "dhgroup": "ffdhe6144" 00:22:02.517 } 00:22:02.517 } 00:22:02.517 ]' 00:22:02.517 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.776 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.035 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:03.035 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.603 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.604 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.171 00:22:04.171 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.171 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.171 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.430 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.431 { 00:22:04.431 "cntlid": 137, 00:22:04.431 "qid": 0, 00:22:04.431 "state": "enabled", 00:22:04.431 "thread": "nvmf_tgt_poll_group_000", 00:22:04.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:04.431 "listen_address": { 00:22:04.431 "trtype": "TCP", 00:22:04.431 "adrfam": "IPv4", 00:22:04.431 "traddr": "10.0.0.2", 00:22:04.431 "trsvcid": "4420" 00:22:04.431 }, 00:22:04.431 "peer_address": { 00:22:04.431 "trtype": "TCP", 00:22:04.431 "adrfam": "IPv4", 00:22:04.431 "traddr": "10.0.0.1", 00:22:04.431 "trsvcid": "59048" 00:22:04.431 }, 00:22:04.431 "auth": { 00:22:04.431 "state": "completed", 00:22:04.431 "digest": "sha512", 00:22:04.431 "dhgroup": "ffdhe8192" 00:22:04.431 } 00:22:04.431 } 00:22:04.431 ]' 00:22:04.431 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.431 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.689 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:22:04.689 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.257 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.516 22:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.516 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.084 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.084 { 00:22:06.084 "cntlid": 139, 00:22:06.084 "qid": 0, 00:22:06.084 "state": "enabled", 00:22:06.084 "thread": "nvmf_tgt_poll_group_000", 00:22:06.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:06.084 "listen_address": { 00:22:06.084 "trtype": "TCP", 00:22:06.084 "adrfam": "IPv4", 00:22:06.084 "traddr": "10.0.0.2", 00:22:06.084 "trsvcid": "4420" 00:22:06.084 }, 00:22:06.084 "peer_address": { 00:22:06.084 "trtype": "TCP", 00:22:06.084 "adrfam": "IPv4", 00:22:06.084 "traddr": "10.0.0.1", 00:22:06.084 "trsvcid": "59076" 00:22:06.084 }, 00:22:06.084 "auth": { 00:22:06.084 "state": "completed", 00:22:06.084 "digest": "sha512", 00:22:06.084 "dhgroup": "ffdhe8192" 00:22:06.084 } 00:22:06.084 } 00:22:06.084 ]' 00:22:06.084 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.343 22:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.343 22:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.602 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:22:06.602 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: --dhchap-ctrl-secret DHHC-1:02:ZWEyYmJjYjM5YWE4NjRkZDc0MTJjM2JmM2E1ZjZiODNhMDc1YzY2MTU3MzQ1MTM3XtmWFA==: 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.169 22:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.169 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.737 00:22:07.737 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.737 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.737 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.996 { 00:22:07.996 "cntlid": 141, 00:22:07.996 "qid": 0, 00:22:07.996 "state": "enabled", 00:22:07.996 "thread": "nvmf_tgt_poll_group_000", 00:22:07.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:07.996 "listen_address": { 00:22:07.996 "trtype": "TCP", 00:22:07.996 "adrfam": "IPv4", 00:22:07.996 "traddr": "10.0.0.2", 00:22:07.996 "trsvcid": "4420" 00:22:07.996 }, 00:22:07.996 "peer_address": { 00:22:07.996 "trtype": "TCP", 00:22:07.996 "adrfam": "IPv4", 00:22:07.996 "traddr": "10.0.0.1", 00:22:07.996 "trsvcid": "59108" 00:22:07.996 }, 00:22:07.996 "auth": { 00:22:07.996 "state": "completed", 00:22:07.996 "digest": "sha512", 00:22:07.996 "dhgroup": "ffdhe8192" 00:22:07.996 } 00:22:07.996 } 00:22:07.996 ]' 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.996 22:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.996 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.255 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:22:08.255 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:01:ZmNkZGQ4ODhmYWNiN2U3ZTJjNjJjNWUzMzg2ZjZhN2STF5NE: 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.823 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.081 22:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.081 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.082 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.082 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.082 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.082 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.649 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.649 { 00:22:09.649 "cntlid": 143, 00:22:09.649 "qid": 0, 00:22:09.649 "state": "enabled", 00:22:09.649 "thread": "nvmf_tgt_poll_group_000", 00:22:09.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:09.649 "listen_address": { 00:22:09.649 "trtype": "TCP", 00:22:09.649 "adrfam": "IPv4", 00:22:09.649 "traddr": "10.0.0.2", 00:22:09.649 "trsvcid": "4420" 00:22:09.649 }, 00:22:09.649 "peer_address": { 00:22:09.649 "trtype": "TCP", 00:22:09.649 "adrfam": "IPv4", 00:22:09.649 "traddr": "10.0.0.1", 00:22:09.649 "trsvcid": "59144" 00:22:09.649 }, 00:22:09.649 "auth": { 00:22:09.649 "state": "completed", 00:22:09.649 "digest": "sha512", 00:22:09.649 "dhgroup": "ffdhe8192" 00:22:09.649 } 00:22:09.649 } 00:22:09.649 ]' 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.649 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.649 
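Each ffdhe8192 iteration above runs the same connect_authenticate cycle, once per key. A condensed sketch of one iteration follows, with the addresses, NQNs and paths copied from the trace; hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock (visible in its expansions), while rpc_cmd drives the target app, assumed here to listen on the default /var/tmp/spdk.sock:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
# Restrict the host to the digest/dhgroup pair under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# Allow the host on the subsystem with this iteration's key pair.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Attach from the host side; this is where the DH-HMAC-CHAP handshake runs.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the qpair reports auth state "completed" with the expected digest and
# dhgroup (the trace checks .digest, .dhgroup and .state individually with jq).
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
# Detach, reconnect once through the kernel initiator (nvme connect
# --dhchap-secret), then remove the host entry before moving to the next key.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0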
22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.908 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.908 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.908 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.908 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.908 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.167 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:10.167 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.735 22:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.735 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.303 00:22:11.303 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.303 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.303 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.562 { 00:22:11.562 "cntlid": 145, 00:22:11.562 "qid": 0, 00:22:11.562 "state": "enabled", 00:22:11.562 "thread": "nvmf_tgt_poll_group_000", 00:22:11.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:11.562 "listen_address": { 00:22:11.562 "trtype": "TCP", 00:22:11.562 "adrfam": "IPv4", 00:22:11.562 "traddr": "10.0.0.2", 00:22:11.562 "trsvcid": "4420" 00:22:11.562 }, 00:22:11.562 "peer_address": { 00:22:11.562 
"trtype": "TCP", 00:22:11.562 "adrfam": "IPv4", 00:22:11.562 "traddr": "10.0.0.1", 00:22:11.562 "trsvcid": "59168" 00:22:11.562 }, 00:22:11.562 "auth": { 00:22:11.562 "state": "completed", 00:22:11.562 "digest": "sha512", 00:22:11.562 "dhgroup": "ffdhe8192" 00:22:11.562 } 00:22:11.562 } 00:22:11.562 ]' 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.562 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.821 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:22:11.821 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODgwZThkZjk4NGQ0MTcwZjVmM2M5OGQzMTM3NmVmNzdmMTA0NGQ3MGIwY2FlNDYzXf1NDA==: --dhchap-ctrl-secret DHHC-1:03:ZDJiMGRkMmY3ZWVhZTE5MjFhOGFkMDJhMDJmYjY3MzY0NzhlZGVhOGIxYWFlYmE1MGFmZThhZWYxMDQzZDUzOCVIIhU=: 00:22:12.388 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:12.388 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:12.956 request: 00:22:12.956 { 00:22:12.956 "name": "nvme0", 00:22:12.956 "trtype": "tcp", 00:22:12.956 "traddr": "10.0.0.2", 00:22:12.956 "adrfam": "ipv4", 00:22:12.956 "trsvcid": "4420", 00:22:12.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:12.956 "prchk_reftag": false, 00:22:12.956 "prchk_guard": false, 00:22:12.956 "hdgst": false, 00:22:12.956 "ddgst": false, 00:22:12.956 "dhchap_key": "key2", 00:22:12.956 "allow_unrecognized_csi": false, 00:22:12.956 "method": "bdev_nvme_attach_controller", 00:22:12.956 "req_id": 1 00:22:12.956 } 00:22:12.956 Got JSON-RPC error response 00:22:12.956 response: 00:22:12.956 { 00:22:12.956 "code": -5, 00:22:12.956 "message": "Input/output error" 00:22:12.956 } 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.956 22:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.956 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.215 request: 00:22:13.215 { 00:22:13.215 "name": "nvme0", 00:22:13.215 "trtype": "tcp", 00:22:13.215 "traddr": "10.0.0.2", 00:22:13.215 "adrfam": "ipv4", 00:22:13.215 "trsvcid": "4420", 00:22:13.215 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:13.215 "prchk_reftag": false, 00:22:13.215 "prchk_guard": false, 00:22:13.215 "hdgst": false, 00:22:13.215 "ddgst": false, 00:22:13.215 "dhchap_key": "key1", 00:22:13.215 "dhchap_ctrlr_key": "ckey2", 00:22:13.215 "allow_unrecognized_csi": false, 00:22:13.215 "method": "bdev_nvme_attach_controller", 00:22:13.215 "req_id": 1 00:22:13.215 } 00:22:13.215 Got JSON-RPC error response 00:22:13.215 response: 00:22:13.215 { 00:22:13.215 "code": -5, 00:22:13.215 "message": "Input/output error" 00:22:13.215 } 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.474 22:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.474 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.733 request: 00:22:13.733 { 00:22:13.733 "name": "nvme0", 00:22:13.733 "trtype": "tcp", 00:22:13.733 "traddr": "10.0.0.2", 00:22:13.733 "adrfam": "ipv4", 00:22:13.733 "trsvcid": "4420", 00:22:13.733 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:13.733 "prchk_reftag": false, 00:22:13.733 "prchk_guard": false, 00:22:13.733 "hdgst": false, 00:22:13.733 "ddgst": false, 00:22:13.733 "dhchap_key": "key1", 00:22:13.733 "dhchap_ctrlr_key": "ckey1", 00:22:13.733 "allow_unrecognized_csi": false, 00:22:13.733 "method": "bdev_nvme_attach_controller", 00:22:13.733 "req_id": 1 00:22:13.733 } 00:22:13.733 Got JSON-RPC error response 00:22:13.733 response: 00:22:13.733 { 00:22:13.733 "code": -5, 00:22:13.733 "message": "Input/output error" 00:22:13.733 } 00:22:13.733 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:13.733 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:13.733 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:13.733 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:13.733 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 314381 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314381 ']' 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314381 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.734 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314381 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314381' 00:22:13.993 killing process with pid 314381 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314381 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314381 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=336500 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 336500 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336500 ']' 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.993 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 336500 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336500 ']' 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
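With the key-mismatch attempts above each failing as expected with JSON-RPC error -5 (Input/output error), the first target (pid 314381) is killed and a fresh one is started with nvmf_auth debug logging, paused until RPC initialization. A minimal sketch of that relaunch, with the command line taken from the trace; waitforlisten is an autotest helper, only approximated here by the retry loop:

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Start nvmf_tgt inside the test netns, paused (--wait-for-rpc), auth tracing on.
ip netns exec cvl_0_0_ns_spdk $NVMF_TGT -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll until the app listens on /var/tmp/spdk.sock before issuing further RPCs.
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done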
00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.252 22:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.511 null0 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.511 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FNk 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.uxB ]] 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uxB 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.512 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AlP 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.2qy ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2qy 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:14.771 22:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZfZ 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.TPw ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPw 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Aqg 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
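Under the restarted target the secrets are no longer passed inline; each key file generated earlier in the run is registered with the keyring and then referenced by name. A condensed recap of the registrations the rpc_cmd calls above perform:

rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.FNk
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uxB
rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.AlP
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2qy
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha384.ZfZ
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TPw
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.Aqg  # no ctrlr key paired with key3 (ckeys[3] is empty in the trace)
# The names key0..key3 and ckey0..ckey2 are then used by nvmf_subsystem_add_host
# and bdev_nvme_attach_controller in place of raw DHHC-1 secrets.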
00:22:14.771 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.339 nvme0n1 00:22:15.339 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.339 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.339 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.599 { 00:22:15.599 "cntlid": 1, 00:22:15.599 "qid": 0, 00:22:15.599 "state": "enabled", 00:22:15.599 "thread": "nvmf_tgt_poll_group_000", 00:22:15.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:15.599 "listen_address": { 00:22:15.599 "trtype": "TCP", 00:22:15.599 "adrfam": "IPv4", 00:22:15.599 "traddr": "10.0.0.2", 00:22:15.599 "trsvcid": "4420" 00:22:15.599 }, 00:22:15.599 "peer_address": { 00:22:15.599 "trtype": "TCP", 00:22:15.599 "adrfam": "IPv4", 00:22:15.599 "traddr": "10.0.0.1", 00:22:15.599 "trsvcid": "58080" 00:22:15.599 }, 00:22:15.599 "auth": { 00:22:15.599 "state": "completed", 00:22:15.599 "digest": "sha512", 00:22:15.599 "dhgroup": "ffdhe8192" 00:22:15.599 } 00:22:15.599 } 00:22:15.599 ]' 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.599 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:15.857 22:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:16.426 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.685 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.945 request: 00:22:16.945 { 00:22:16.945 "name": "nvme0", 00:22:16.945 "trtype": "tcp", 00:22:16.945 "traddr": "10.0.0.2", 00:22:16.945 "adrfam": "ipv4", 00:22:16.945 "trsvcid": "4420", 00:22:16.945 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:16.945 "prchk_reftag": false, 00:22:16.945 "prchk_guard": false, 00:22:16.945 "hdgst": false, 00:22:16.945 "ddgst": false, 00:22:16.945 "dhchap_key": "key3", 00:22:16.945 "allow_unrecognized_csi": false, 00:22:16.945 "method": "bdev_nvme_attach_controller", 00:22:16.945 "req_id": 1 00:22:16.945 } 00:22:16.945 Got JSON-RPC error response 00:22:16.945 response: 00:22:16.945 { 00:22:16.945 "code": -5, 00:22:16.945 "message": "Input/output error" 00:22:16.945 } 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:16.945 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.204 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.463 request: 00:22:17.463 { 00:22:17.463 "name": "nvme0", 00:22:17.463 "trtype": "tcp", 00:22:17.463 "traddr": "10.0.0.2", 00:22:17.463 "adrfam": "ipv4", 00:22:17.463 "trsvcid": "4420", 00:22:17.463 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:17.463 "prchk_reftag": false, 00:22:17.463 "prchk_guard": false, 00:22:17.463 "hdgst": false, 00:22:17.463 "ddgst": false, 00:22:17.463 "dhchap_key": "key3", 00:22:17.463 "allow_unrecognized_csi": false, 00:22:17.463 "method": "bdev_nvme_attach_controller", 00:22:17.463 "req_id": 1 00:22:17.463 } 00:22:17.463 Got JSON-RPC error response 00:22:17.463 response: 00:22:17.463 { 00:22:17.463 "code": -5, 00:22:17.463 "message": "Input/output error" 00:22:17.463 } 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.463 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.464 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.032 request: 00:22:18.032 { 00:22:18.032 "name": "nvme0", 00:22:18.032 "trtype": "tcp", 00:22:18.032 "traddr": "10.0.0.2", 00:22:18.032 "adrfam": "ipv4", 00:22:18.032 "trsvcid": "4420", 00:22:18.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:18.032 "prchk_reftag": false, 00:22:18.032 "prchk_guard": false, 00:22:18.032 "hdgst": false, 00:22:18.032 "ddgst": false, 00:22:18.032 "dhchap_key": "key0", 00:22:18.032 "dhchap_ctrlr_key": "key1", 00:22:18.032 "allow_unrecognized_csi": false, 00:22:18.032 "method": "bdev_nvme_attach_controller", 00:22:18.032 "req_id": 1 00:22:18.032 } 00:22:18.032 Got JSON-RPC error response 00:22:18.032 response: 00:22:18.032 { 00:22:18.032 "code": -5, 00:22:18.032 "message": "Input/output error" 00:22:18.032 } 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.032 22:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:18.032 nvme0n1 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:18.032 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.291 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.291 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.291 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:18.550 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:19.487 nvme0n1 00:22:19.487 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:19.487 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:19.487 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:19.487 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.746 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.746 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:19.746 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: --dhchap-ctrl-secret DHHC-1:03:NzY3NDE3NWQzOTg4ZDM5NTkzMmY3NTc4MmYwN2U2ZWEzOGRjNjQ2MzM4ZjNkOTAxNTVhOTU0OTRkZWU0ZmQ4NPulM2I=: 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.312 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.571 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.830 request: 00:22:20.830 { 00:22:20.830 "name": "nvme0", 00:22:20.830 "trtype": "tcp", 00:22:20.830 "traddr": "10.0.0.2", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "4420", 00:22:20.830 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:20.830 "prchk_reftag": false, 00:22:20.830 "prchk_guard": false, 00:22:20.830 "hdgst": false, 00:22:20.830 "ddgst": false, 00:22:20.830 "dhchap_key": "key1", 00:22:20.830 "allow_unrecognized_csi": false, 00:22:20.830 "method": "bdev_nvme_attach_controller", 00:22:20.830 "req_id": 1 00:22:20.830 } 00:22:20.830 Got JSON-RPC error response 00:22:20.830 response: 00:22:20.830 { 00:22:20.830 "code": -5, 00:22:20.830 "message": "Input/output error" 00:22:20.830 } 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.830 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.765 nvme0n1 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.765 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:22.023 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:22.282 nvme0n1 00:22:22.282 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:22.282 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:22.282 22:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.540 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.540 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.540 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: '' 2s 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: ]] 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDZkYmYyNDQxNmJjMjk0MzMxMDUxYzI1MTA4MTRjOTHbQTYc: 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:22.799 22:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: 2s 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: ]] 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmI1Nzc2ZDk5MDZlNjk3NGU1NTkwNWQ4MTcwZTE5MmRiOWM2NDE2YmU3ODY5N2E36qfrRg==: 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:24.744 22:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:26.736 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:26.736 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.736 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.736 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.029 22:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:27.668 nvme0n1 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.668 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:28.302 22:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.657 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.250 request: 00:22:29.250 { 00:22:29.250 "name": "nvme0", 00:22:29.250 "dhchap_key": "key1", 00:22:29.250 "dhchap_ctrlr_key": "key3", 00:22:29.250 "method": "bdev_nvme_set_keys", 00:22:29.250 "req_id": 1 00:22:29.250 } 00:22:29.250 Got JSON-RPC error response 00:22:29.250 response: 00:22:29.250 { 00:22:29.250 "code": -13, 00:22:29.250 "message": "Permission denied" 00:22:29.250 } 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:29.250 22:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.509 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:29.509 22:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:30.443 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.443 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.443 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.702 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:31.269 nvme0n1 00:22:31.269 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.269 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.269 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
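
The steps around this point exercise live re-keying: the target publishes a new key pair for the host with nvmf_subsystem_set_keys, and the host follows with bdev_nvme_set_keys, which reauthenticates the existing controller. A pair the target does not accept is refused with -13 (Permission denied), and because the controller here was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, a host that never adopts matching keys sees its controller reaped within about a second (the jq length polls above dropping from 1 to 0). A minimal sketch of the matched rotation, mirroring the auth.sh@252/@253 calls earlier in the trace:

# Target side: publish the new pair for this host.
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: adopt the same pair on the live controller.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A stale or mismatched pair fails instead of silently downgrading, as the
# request/response that follows shows:
#   bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
#   => {"code": -13, "message": "Permission denied"}
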
00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.527 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:31.786 request: 00:22:31.786 { 00:22:31.786 "name": "nvme0", 00:22:31.786 "dhchap_key": "key2", 00:22:31.786 "dhchap_ctrlr_key": "key0", 00:22:31.786 "method": "bdev_nvme_set_keys", 00:22:31.786 "req_id": 1 00:22:31.786 } 00:22:31.786 Got JSON-RPC error response 00:22:31.786 response: 00:22:31.786 { 00:22:31.786 "code": -13, 00:22:31.786 "message": "Permission denied" 00:22:31.786 } 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:31.786 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.044 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:32.044 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:32.979 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:32.979 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:32.979 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 314403 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314403 ']' 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314403 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:33.238 22:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314403 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314403' 00:22:33.238 killing process with pid 314403 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314403 00:22:33.238 22:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314403 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.806 rmmod nvme_tcp 00:22:33.806 rmmod nvme_fabrics 00:22:33.806 rmmod nvme_keyring 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 336500 ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 336500 ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336500' 00:22:33.806 killing process with pid 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 336500 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.806 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.065 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FNk /tmp/spdk.key-sha256.AlP /tmp/spdk.key-sha384.ZfZ /tmp/spdk.key-sha512.Aqg /tmp/spdk.key-sha512.uxB /tmp/spdk.key-sha384.2qy /tmp/spdk.key-sha256.TPw '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:35.970 00:22:35.970 real 2m34.265s 00:22:35.970 user 5m54.778s 00:22:35.970 sys 0m24.295s 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.970 ************************************ 00:22:35.970 END TEST nvmf_auth_target 00:22:35.970 ************************************ 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.970 ************************************ 00:22:35.970 START TEST nvmf_bdevio_no_huge 00:22:35.970 ************************************ 00:22:35.970 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:36.229 * Looking for test storage... 
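
With the auth suite done, the bdevio prologue that follows probes the installed lcov version via lt 1.15 2, i.e. cmp_versions 1.15 '<' 2. The trace shows the helper splitting both versions on ., - and :, then comparing components left to right. A rough reconstruction of that logic from the trace (not the verbatim scripts/common.sh source, and simplified to numeric components; missing components are assumed here to compare as zero):

# cmp_versions VER1 OP VER2 -- succeeds when "VER1 OP VER2" holds.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # pad the shorter version with 0
        if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

lt() { cmp_versions "$1" '<' "$2"; }   # the "lt 1.15 2" call in the trace

So lcov 1.15 sorts below 2 (1 < 2 on the first component), and the run selects the older-style --rc lcov_* coverage options seen in the LCOV_OPTS exports below.
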
00:22:36.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.229 --rc genhtml_branch_coverage=1 00:22:36.229 --rc genhtml_function_coverage=1 00:22:36.229 --rc genhtml_legend=1 00:22:36.229 --rc geninfo_all_blocks=1 00:22:36.229 --rc geninfo_unexecuted_blocks=1 00:22:36.229 00:22:36.229 ' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.229 --rc genhtml_branch_coverage=1 00:22:36.229 --rc genhtml_function_coverage=1 00:22:36.229 --rc genhtml_legend=1 00:22:36.229 --rc geninfo_all_blocks=1 00:22:36.229 --rc geninfo_unexecuted_blocks=1 00:22:36.229 00:22:36.229 ' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.229 --rc genhtml_branch_coverage=1 00:22:36.229 --rc genhtml_function_coverage=1 00:22:36.229 --rc genhtml_legend=1 00:22:36.229 --rc geninfo_all_blocks=1 00:22:36.229 --rc geninfo_unexecuted_blocks=1 00:22:36.229 00:22:36.229 ' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:36.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.229 --rc genhtml_branch_coverage=1 00:22:36.229 --rc genhtml_function_coverage=1 00:22:36.229 --rc genhtml_legend=1 00:22:36.229 --rc geninfo_all_blocks=1 00:22:36.229 --rc geninfo_unexecuted_blocks=1 00:22:36.229 00:22:36.229 ' 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.229 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:36.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.230 22:28:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.804 
22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:42.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:42.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:42.804 Found net devices under 0000:af:00.0: cvl_0_0 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:42.804 Found net devices under 0000:af:00.1: cvl_0_1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.804 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:22:42.805 00:22:42.805 --- 10.0.0.2 ping statistics --- 00:22:42.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.805 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:22:42.805 00:22:42.805 --- 10.0.0.1 ping statistics --- 00:22:42.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.805 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=343253 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 343253 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 343253 ']' 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 [2024-12-16 22:28:31.763111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:42.805 [2024-12-16 22:28:31.763154] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:42.805 [2024-12-16 22:28:31.841736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.805 [2024-12-16 22:28:31.877330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.805 [2024-12-16 22:28:31.877363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.805 [2024-12-16 22:28:31.877371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.805 [2024-12-16 22:28:31.877378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.805 [2024-12-16 22:28:31.877383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
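
The launch line above is the heart of the no-huge variant: nvmf_tgt runs inside the test namespace with hugepages disabled. A minimal reproduction, with the flags decoded from the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # --no-huge -s 1024 : EAL backs SPDK with 1024 MiB of ordinary 4 KiB pages ("-m 1024 --no-huge" in the EAL line above)
    # -m 0x78           : core mask 0b1111000, i.e. cores 3-6, matching the four reactor notices that follow
    # -e 0xFFFF         : enable all tracepoint groups (the Tracepoint Group Mask notice above)
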
00:22:42.805 [2024-12-16 22:28:31.878438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.805 [2024-12-16 22:28:31.878528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:42.805 [2024-12-16 22:28:31.878634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.805 [2024-12-16 22:28:31.878636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.805 22:28:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 [2024-12-16 22:28:32.030734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 Malloc0 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:42.805 [2024-12-16 22:28:32.079055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.805 { 00:22:42.805 "params": { 00:22:42.805 "name": "Nvme$subsystem", 00:22:42.805 "trtype": "$TEST_TRANSPORT", 00:22:42.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.805 "adrfam": "ipv4", 00:22:42.805 "trsvcid": "$NVMF_PORT", 00:22:42.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.805 "hdgst": ${hdgst:-false}, 00:22:42.805 "ddgst": ${ddgst:-false} 00:22:42.805 }, 00:22:42.805 "method": "bdev_nvme_attach_controller" 00:22:42.805 } 00:22:42.805 EOF 00:22:42.805 )") 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:42.805 22:28:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:42.805 "params": { 00:22:42.805 "name": "Nvme1", 00:22:42.805 "trtype": "tcp", 00:22:42.805 "traddr": "10.0.0.2", 00:22:42.805 "adrfam": "ipv4", 00:22:42.805 "trsvcid": "4420", 00:22:42.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.805 "hdgst": false, 00:22:42.805 "ddgst": false 00:22:42.805 }, 00:22:42.805 "method": "bdev_nvme_attach_controller" 00:22:42.805 }' 00:22:42.805 [2024-12-16 22:28:32.130840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
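
Provisioning in this test is four RPCs against the freshly started target, after which bdevio attaches as an initiator using the JSON config printed above (fed in over /dev/fd/62). The same subsystem can be built by hand with scripts/rpc.py — flags verbatim from the trace, socket path assumed to be the default /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                        # -u 8192: in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
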
00:22:42.805 [2024-12-16 22:28:32.130882] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid343279 ] 00:22:42.805 [2024-12-16 22:28:32.204745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:42.805 [2024-12-16 22:28:32.242139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.805 [2024-12-16 22:28:32.242227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.805 [2024-12-16 22:28:32.242227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.063 I/O targets: 00:22:43.063 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:43.063 00:22:43.063 00:22:43.063 CUnit - A unit testing framework for C - Version 2.1-3 00:22:43.063 http://cunit.sourceforge.net/ 00:22:43.063 00:22:43.063 00:22:43.063 Suite: bdevio tests on: Nvme1n1 00:22:43.063 Test: blockdev write read block ...passed 00:22:43.063 Test: blockdev write zeroes read block ...passed 00:22:43.063 Test: blockdev write zeroes read no split ...passed 00:22:43.063 Test: blockdev write zeroes read split ...passed 00:22:43.063 Test: blockdev write zeroes read split partial ...passed 00:22:43.063 Test: blockdev reset ...[2024-12-16 22:28:32.687570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:43.063 [2024-12-16 22:28:32.687631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a4ea0 (9): Bad file descriptor 00:22:43.321 [2024-12-16 22:28:32.783396] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:43.321 passed 00:22:43.321 Test: blockdev write read 8 blocks ...passed 00:22:43.321 Test: blockdev write read size > 128k ...passed 00:22:43.321 Test: blockdev write read invalid size ...passed 00:22:43.321 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:43.321 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:43.321 Test: blockdev write read max offset ...passed 00:22:43.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:43.321 Test: blockdev writev readv 8 blocks ...passed 00:22:43.321 Test: blockdev writev readv 30 x 1block ...passed 00:22:43.321 Test: blockdev writev readv block ...passed 00:22:43.321 Test: blockdev writev readv size > 128k ...passed 00:22:43.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:43.321 Test: blockdev comparev and writev ...[2024-12-16 22:28:32.993996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.321 [2024-12-16 22:28:32.994816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:43.321 [2024-12-16 22:28:32.994823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.579 passed 00:22:43.579 Test: blockdev nvme passthru rw ...passed 00:22:43.579 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:28:33.076476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.579 [2024-12-16 22:28:33.076492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.579 [2024-12-16 22:28:33.076598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.579 [2024-12-16 22:28:33.076609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.579 [2024-12-16 22:28:33.076712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.579 [2024-12-16 22:28:33.076721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.579 [2024-12-16 22:28:33.076821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:43.579 [2024-12-16 22:28:33.076830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.579 passed 00:22:43.579 Test: blockdev nvme admin passthru ...passed 00:22:43.579 Test: blockdev copy ...passed 00:22:43.579 00:22:43.579 Run Summary: Type Total Ran Passed Failed Inactive 00:22:43.579 suites 1 1 n/a 0 0 00:22:43.579 tests 23 23 23 0 0 00:22:43.579 asserts 152 152 152 0 n/a 00:22:43.579 00:22:43.579 Elapsed time = 1.140 seconds 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.837 rmmod nvme_tcp 00:22:43.837 rmmod nvme_fabrics 00:22:43.837 rmmod nvme_keyring 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 343253 ']' 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 343253 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 343253 ']' 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 343253 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343253 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343253' 00:22:43.837 killing process with pid 343253 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 343253 00:22:43.837 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 343253 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.096 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.355 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.355 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.355 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.355 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.355 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.260 00:22:46.260 real 0m10.212s 00:22:46.260 user 0m11.580s 00:22:46.260 sys 0m5.224s 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.260 ************************************ 00:22:46.260 END TEST nvmf_bdevio_no_huge 00:22:46.260 ************************************ 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:46.260 ************************************ 00:22:46.260 START TEST nvmf_tls 00:22:46.260 ************************************ 00:22:46.260 22:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:46.520 * Looking for test storage... 00:22:46.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:46.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.520 --rc genhtml_branch_coverage=1 00:22:46.520 --rc genhtml_function_coverage=1 00:22:46.520 --rc genhtml_legend=1 00:22:46.520 --rc geninfo_all_blocks=1 00:22:46.520 --rc geninfo_unexecuted_blocks=1 00:22:46.520 00:22:46.520 ' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:46.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.520 --rc genhtml_branch_coverage=1 00:22:46.520 --rc genhtml_function_coverage=1 00:22:46.520 --rc genhtml_legend=1 00:22:46.520 --rc geninfo_all_blocks=1 00:22:46.520 --rc geninfo_unexecuted_blocks=1 00:22:46.520 00:22:46.520 ' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:46.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.520 --rc genhtml_branch_coverage=1 00:22:46.520 --rc genhtml_function_coverage=1 00:22:46.520 --rc genhtml_legend=1 00:22:46.520 --rc geninfo_all_blocks=1 00:22:46.520 --rc geninfo_unexecuted_blocks=1 00:22:46.520 00:22:46.520 ' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:46.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.520 --rc genhtml_branch_coverage=1 00:22:46.520 --rc genhtml_function_coverage=1 00:22:46.520 --rc genhtml_legend=1 00:22:46.520 --rc geninfo_all_blocks=1 00:22:46.520 --rc geninfo_unexecuted_blocks=1 00:22:46.520 00:22:46.520 ' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
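
The lt/cmp_versions trace above (repeated from the bdevio run) is how the suite decides whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* flag spelling. A bash paraphrase of the comparison as traced — a sketch only; the real scripts/common.sh also routes each component through the decimal sanitizer visible in the trace:

    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"; local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                 # versions equal: only <=, >=, == succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, so the pre-2.x lcov flags are exported
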
00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.520 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.521 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.090 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:53.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:53.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:53.091 Found net devices under 0000:af:00.0: cvl_0_0 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:53.091 Found net devices under 0000:af:00.1: cvl_0_1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.091 22:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:22:53.091 00:22:53.091 --- 10.0.0.2 ping statistics --- 00:22:53.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.091 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:53.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:22:53.091 00:22:53.091 --- 10.0.0.1 ping statistics --- 00:22:53.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.091 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=346978 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 346978 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 346978 ']' 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.091 [2024-12-16 22:28:42.119559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
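To recap the startup sequence being traced: after moving one port of the e810 pair into the cvl_0_0_ns_spdk namespace and verifying connectivity with the pings above, nvmfappstart launches the target inside that namespace with --wait-for-rpc, which pauses app initialization until framework_start_init arrives over RPC. That ordering matters here, because the ssl socket implementation and its TLS version must be configured before the transport comes up; this is why the sock_set_default_impl/sock_impl_set_options RPCs appear next in the trace and framework_start_init only at tls.sh@132. Condensed from the commands traced in this section (paths shortened; backgrounding and waitforlisten handling simplified):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
./scripts/rpc.py sock_set_default_impl -i ssl
./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
./scripts/rpc.py framework_start_init   # only now does subsystem init actually run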
00:22:53.091 [2024-12-16 22:28:42.119602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.091 [2024-12-16 22:28:42.198361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.091 [2024-12-16 22:28:42.218835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.091 [2024-12-16 22:28:42.218868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.091 [2024-12-16 22:28:42.218875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.091 [2024-12-16 22:28:42.218881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.091 [2024-12-16 22:28:42.218886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.091 [2024-12-16 22:28:42.219384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.091 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:53.092 true 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:53.092 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.350 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.350 22:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:53.609 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:53.609 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:53.609 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:53.609 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:53.609 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:53.867 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:53.867 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:53.867 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:53.867 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.126 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:54.126 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:54.126 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:54.384 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.384 22:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:54.384 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:54.384 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:54.384 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:54.642 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:54.642 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.A5Q3i5TvYu 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6UpFnjJv18 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.A5Q3i5TvYu 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6UpFnjJv18 00:22:54.901 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:55.160 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:55.419 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.A5Q3i5TvYu 00:22:55.419 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A5Q3i5TvYu 00:22:55.419 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:55.419 [2024-12-16 22:28:45.095902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.419 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:55.676 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.934 [2024-12-16 22:28:45.452794] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.934 [2024-12-16 22:28:45.453020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.934 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:56.192 malloc0 00:22:56.192 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:56.192 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A5Q3i5TvYu 00:22:56.451 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:56.710 22:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.A5Q3i5TvYu 00:23:06.688 Initializing NVMe Controllers 00:23:06.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.688 Initialization complete. Launching workers. 00:23:06.688 ======================================================== 00:23:06.688 Latency(us) 00:23:06.688 Device Information : IOPS MiB/s Average min max 00:23:06.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16905.38 66.04 3785.83 944.23 6143.28 00:23:06.688 ======================================================== 00:23:06.688 Total : 16905.38 66.04 3785.83 944.23 6143.28 00:23:06.688 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5Q3i5TvYu 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A5Q3i5TvYu 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=349270 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 349270 /var/tmp/bdevperf.sock 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349270 ']' 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:06.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.688 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.688 [2024-12-16 22:28:56.379259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:06.688 [2024-12-16 22:28:56.379305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349270 ] 00:23:06.947 [2024-12-16 22:28:56.453203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.947 [2024-12-16 22:28:56.475785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.947 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.947 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.947 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5Q3i5TvYu 00:23:07.206 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.206 [2024-12-16 22:28:56.902860] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.465 TLSTESTn1 00:23:07.465 22:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.465 Running I/O for 10 seconds... 
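Two details worth unpacking before the I/O results. First, format_interchange_psk wraps raw key bytes in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<2-hex-digit hash id>:<base64(key bytes + 4-byte CRC-32)>:, which is how 00112233445566778899aabbccddeeff with digest 1 became the NVMeTLSkey-1:01:MDAx...JEiQ: string above. Second, both ends reference the key file through a named keyring entry (key0) rather than passing it inline. A condensed sketch of the flow, mirroring the RPCs traced above with paths shortened; the /tmp/tls_key path is illustrative, and the little-endian CRC-32 is an assumption about the harness's format_key python one-liner:

# generate an interchange-format key file the way format_interchange_psk does
python3 - <<'EOF' > /tmp/tls_key
import base64, zlib
key = b"00112233445566778899aabbccddeeff"    # ASCII hex string used verbatim as key bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # little-endian CRC-32 (assumed)
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode(), end="")
EOF
chmod 0600 /tmp/tls_key

# target side: TLS-capable listener (-k), key registered and bound to the allowed host
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py keyring_file_add_key key0 /tmp/tls_key
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# client side: same key under the bdevperf RPC socket, then a TLS attach and the test run
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tls_key
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests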
00:23:09.777 5186.00 IOPS, 20.26 MiB/s [2024-12-16T21:29:00.414Z] 5284.50 IOPS, 20.64 MiB/s [2024-12-16T21:29:01.350Z] 5220.67 IOPS, 20.39 MiB/s [2024-12-16T21:29:02.286Z] 5104.75 IOPS, 19.94 MiB/s [2024-12-16T21:29:03.221Z] 5126.20 IOPS, 20.02 MiB/s [2024-12-16T21:29:04.157Z] 5211.33 IOPS, 20.36 MiB/s [2024-12-16T21:29:05.533Z] 5233.29 IOPS, 20.44 MiB/s [2024-12-16T21:29:06.100Z] 5269.50 IOPS, 20.58 MiB/s [2024-12-16T21:29:07.488Z] 5271.11 IOPS, 20.59 MiB/s [2024-12-16T21:29:07.488Z] 5290.40 IOPS, 20.67 MiB/s 00:23:17.787 Latency(us) 00:23:17.787 [2024-12-16T21:29:07.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.787 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.787 Verification LBA range: start 0x0 length 0x2000 00:23:17.787 TLSTESTn1 : 10.01 5296.21 20.69 0.00 0.00 24134.24 5118.05 32206.26 00:23:17.787 [2024-12-16T21:29:07.488Z] =================================================================================================================== 00:23:17.787 [2024-12-16T21:29:07.488Z] Total : 5296.21 20.69 0.00 0.00 24134.24 5118.05 32206.26 00:23:17.787 { 00:23:17.787 "results": [ 00:23:17.787 { 00:23:17.787 "job": "TLSTESTn1", 00:23:17.787 "core_mask": "0x4", 00:23:17.787 "workload": "verify", 00:23:17.787 "status": "finished", 00:23:17.787 "verify_range": { 00:23:17.787 "start": 0, 00:23:17.787 "length": 8192 00:23:17.787 }, 00:23:17.787 "queue_depth": 128, 00:23:17.787 "io_size": 4096, 00:23:17.787 "runtime": 10.013002, 00:23:17.787 "iops": 5296.213862735671, 00:23:17.787 "mibps": 20.688335401311214, 00:23:17.787 "io_failed": 0, 00:23:17.787 "io_timeout": 0, 00:23:17.787 "avg_latency_us": 24134.237206306105, 00:23:17.787 "min_latency_us": 5118.049523809524, 00:23:17.787 "max_latency_us": 32206.262857142858 00:23:17.787 } 00:23:17.787 ], 00:23:17.787 "core_count": 1 00:23:17.787 } 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 349270 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349270 ']' 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349270 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349270 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349270' 00:23:17.787 killing process with pid 349270 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349270 00:23:17.787 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.787 00:23:17.787 Latency(us) 00:23:17.787 [2024-12-16T21:29:07.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.787 [2024-12-16T21:29:07.488Z] 
=================================================================================================================== 00:23:17.787 [2024-12-16T21:29:07.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349270 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UpFnjJv18 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UpFnjJv18 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6UpFnjJv18 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6UpFnjJv18 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351052 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351052 /var/tmp/bdevperf.sock 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351052 ']' 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.787 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.787 [2024-12-16 22:29:07.397282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:17.787 [2024-12-16 22:29:07.397328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351052 ] 00:23:17.787 [2024-12-16 22:29:07.467083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.787 [2024-12-16 22:29:07.486579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.046 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.046 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.046 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6UpFnjJv18 00:23:18.304 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.304 [2024-12-16 22:29:07.942072] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.304 [2024-12-16 22:29:07.947907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:18.304 [2024-12-16 22:29:07.948572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe82340 (107): Transport endpoint is not connected 00:23:18.304 [2024-12-16 22:29:07.949565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe82340 (9): Bad file descriptor 00:23:18.304 [2024-12-16 22:29:07.950566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:18.304 [2024-12-16 22:29:07.950577] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.304 [2024-12-16 22:29:07.950585] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:18.304 [2024-12-16 22:29:07.950593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:18.304 request: 00:23:18.304 { 00:23:18.305 "name": "TLSTEST", 00:23:18.305 "trtype": "tcp", 00:23:18.305 "traddr": "10.0.0.2", 00:23:18.305 "adrfam": "ipv4", 00:23:18.305 "trsvcid": "4420", 00:23:18.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.305 "prchk_reftag": false, 00:23:18.305 "prchk_guard": false, 00:23:18.305 "hdgst": false, 00:23:18.305 "ddgst": false, 00:23:18.305 "psk": "key0", 00:23:18.305 "allow_unrecognized_csi": false, 00:23:18.305 "method": "bdev_nvme_attach_controller", 00:23:18.305 "req_id": 1 00:23:18.305 } 00:23:18.305 Got JSON-RPC error response 00:23:18.305 response: 00:23:18.305 { 00:23:18.305 "code": -5, 00:23:18.305 "message": "Input/output error" 00:23:18.305 } 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351052 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351052 ']' 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351052 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.305 22:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351052 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351052' 00:23:18.564 killing process with pid 351052 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351052 00:23:18.564 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.564 00:23:18.564 Latency(us) 00:23:18.564 [2024-12-16T21:29:08.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.564 [2024-12-16T21:29:08.265Z] =================================================================================================================== 00:23:18.564 [2024-12-16T21:29:08.265Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351052 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A5Q3i5TvYu 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.A5Q3i5TvYu 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A5Q3i5TvYu 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A5Q3i5TvYu 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351281 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351281 /var/tmp/bdevperf.sock 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351281 ']' 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.564 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.564 [2024-12-16 22:29:08.227899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:18.564 [2024-12-16 22:29:08.227947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351281 ] 00:23:18.823 [2024-12-16 22:29:08.296691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.823 [2024-12-16 22:29:08.316410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.823 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.823 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.823 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5Q3i5TvYu 00:23:19.081 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:19.081 [2024-12-16 22:29:08.767199] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.081 [2024-12-16 22:29:08.771795] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.081 [2024-12-16 22:29:08.771822] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.081 [2024-12-16 22:29:08.771846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:19.081 [2024-12-16 22:29:08.772524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312340 (107): Transport endpoint is not connected 00:23:19.081 [2024-12-16 22:29:08.773516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2312340 (9): Bad file descriptor 00:23:19.081 [2024-12-16 22:29:08.774518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:19.081 [2024-12-16 22:29:08.774527] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.081 [2024-12-16 22:29:08.774534] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:19.081 [2024-12-16 22:29:08.774542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:19.081 request: 00:23:19.081 { 00:23:19.081 "name": "TLSTEST", 00:23:19.081 "trtype": "tcp", 00:23:19.081 "traddr": "10.0.0.2", 00:23:19.081 "adrfam": "ipv4", 00:23:19.081 "trsvcid": "4420", 00:23:19.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.081 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.081 "prchk_reftag": false, 00:23:19.081 "prchk_guard": false, 00:23:19.081 "hdgst": false, 00:23:19.081 "ddgst": false, 00:23:19.081 "psk": "key0", 00:23:19.081 "allow_unrecognized_csi": false, 00:23:19.081 "method": "bdev_nvme_attach_controller", 00:23:19.081 "req_id": 1 00:23:19.081 } 00:23:19.081 Got JSON-RPC error response 00:23:19.081 response: 00:23:19.081 { 00:23:19.081 "code": -5, 00:23:19.082 "message": "Input/output error" 00:23:19.082 } 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351281 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351281 ']' 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351281 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351281 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351281' 00:23:19.341 killing process with pid 351281 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351281 00:23:19.341 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.341 00:23:19.341 Latency(us) 00:23:19.341 [2024-12-16T21:29:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.341 [2024-12-16T21:29:09.042Z] =================================================================================================================== 00:23:19.341 [2024-12-16T21:29:09.042Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351281 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5Q3i5TvYu 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.A5Q3i5TvYu 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.341 22:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5Q3i5TvYu 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A5Q3i5TvYu 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351295 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351295 /var/tmp/bdevperf.sock 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351295 ']' 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.341 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.599 [2024-12-16 22:29:09.046470] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:19.599 [2024-12-16 22:29:09.046530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351295 ] 00:23:19.599 [2024-12-16 22:29:09.109699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.599 [2024-12-16 22:29:09.130089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.599 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.599 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.599 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5Q3i5TvYu 00:23:19.857 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.116 [2024-12-16 22:29:09.589219] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.116 [2024-12-16 22:29:09.593874] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.116 [2024-12-16 22:29:09.593896] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.116 [2024-12-16 22:29:09.593919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:20.116 [2024-12-16 22:29:09.594568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169340 (107): Transport endpoint is not connected 00:23:20.116 [2024-12-16 22:29:09.595558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1169340 (9): Bad file descriptor 00:23:20.116 [2024-12-16 22:29:09.596560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:20.116 [2024-12-16 22:29:09.596569] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.116 [2024-12-16 22:29:09.596577] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:20.116 [2024-12-16 22:29:09.596584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:20.116 request: 00:23:20.116 { 00:23:20.116 "name": "TLSTEST", 00:23:20.116 "trtype": "tcp", 00:23:20.116 "traddr": "10.0.0.2", 00:23:20.116 "adrfam": "ipv4", 00:23:20.116 "trsvcid": "4420", 00:23:20.116 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.116 "prchk_reftag": false, 00:23:20.116 "prchk_guard": false, 00:23:20.116 "hdgst": false, 00:23:20.116 "ddgst": false, 00:23:20.116 "psk": "key0", 00:23:20.116 "allow_unrecognized_csi": false, 00:23:20.116 "method": "bdev_nvme_attach_controller", 00:23:20.116 "req_id": 1 00:23:20.116 } 00:23:20.116 Got JSON-RPC error response 00:23:20.116 response: 00:23:20.116 { 00:23:20.116 "code": -5, 00:23:20.116 "message": "Input/output error" 00:23:20.116 } 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351295 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351295 ']' 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351295 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351295 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351295' 00:23:20.116 killing process with pid 351295 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351295 00:23:20.116 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.116 00:23:20.116 Latency(us) 00:23:20.116 [2024-12-16T21:29:09.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.116 [2024-12-16T21:29:09.817Z] =================================================================================================================== 00:23:20.116 [2024-12-16T21:29:09.817Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351295 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.116 22:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.116 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351520 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351520 /var/tmp/bdevperf.sock 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351520 ']' 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.375 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.375 [2024-12-16 22:29:09.861908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:20.375 [2024-12-16 22:29:09.861953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351520 ] 00:23:20.375 [2024-12-16 22:29:09.931853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.375 [2024-12-16 22:29:09.953522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.375 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.375 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.375 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:20.633 [2024-12-16 22:29:10.221128] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:20.633 [2024-12-16 22:29:10.221162] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:20.633 request: 00:23:20.633 { 00:23:20.633 "name": "key0", 00:23:20.633 "path": "", 00:23:20.633 "method": "keyring_file_add_key", 00:23:20.633 "req_id": 1 00:23:20.633 } 00:23:20.633 Got JSON-RPC error response 00:23:20.633 response: 00:23:20.633 { 00:23:20.633 "code": -1, 00:23:20.633 "message": "Operation not permitted" 00:23:20.633 } 00:23:20.633 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.892 [2024-12-16 22:29:10.429747] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.892 [2024-12-16 22:29:10.429781] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:20.892 request: 00:23:20.892 { 00:23:20.892 "name": "TLSTEST", 00:23:20.892 "trtype": "tcp", 00:23:20.892 "traddr": "10.0.0.2", 00:23:20.892 "adrfam": "ipv4", 00:23:20.892 "trsvcid": "4420", 00:23:20.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.892 "prchk_reftag": false, 00:23:20.892 "prchk_guard": false, 00:23:20.892 "hdgst": false, 00:23:20.892 "ddgst": false, 00:23:20.892 "psk": "key0", 00:23:20.892 "allow_unrecognized_csi": false, 00:23:20.892 "method": "bdev_nvme_attach_controller", 00:23:20.892 "req_id": 1 00:23:20.892 } 00:23:20.893 Got JSON-RPC error response 00:23:20.893 response: 00:23:20.893 { 00:23:20.893 "code": -126, 00:23:20.893 "message": "Required key not available" 00:23:20.893 } 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351520 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351520 ']' 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351520 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351520 
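The three negative cases above land exactly where the test expects. The target resolves the TLS PSK by the identity string it logs in the tcp.c/posix.c errors ("NVMe0R01 <hostnqn> <subnqn>"), so a key that is not served for a given host/subsystem pair cannot complete a handshake for host2 against cnode1 or host1 against cnode2, and the initiator surfaces each as a -5 Input/output error on bdev_nvme_attach_controller. The @156 empty-path case never gets that far: keyring_file_add_key rejects any non-absolute path with -1, and the follow-up attach fails with -126 because key0 was never registered. A throwaway sketch of the identity format as it appears in those errors (the helper name is illustrative, not part of tls.sh):

    # Hypothetical helper mirroring the PSK identity string logged by the target.
    psk_identity() {
        local hostnqn=$1 subnqn=$2
        printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    }
    psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1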
00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351520' 00:23:20.893 killing process with pid 351520 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351520 00:23:20.893 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.893 00:23:20.893 Latency(us) 00:23:20.893 [2024-12-16T21:29:10.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.893 [2024-12-16T21:29:10.594Z] =================================================================================================================== 00:23:20.893 [2024-12-16T21:29:10.594Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.893 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351520 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 346978 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 346978 ']' 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 346978 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 346978 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 346978' 00:23:21.152 killing process with pid 346978 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 346978 00:23:21.152 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 346978 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vWxpl3eHQY 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vWxpl3eHQY 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351751 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351751 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351751 ']' 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.411 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.411 [2024-12-16 22:29:10.965903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:21.411 [2024-12-16 22:29:10.965947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.411 [2024-12-16 22:29:11.042784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.411 [2024-12-16 22:29:11.061809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.411 [2024-12-16 22:29:11.061847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
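The key_long produced by the @160-@163 steps above is the TLS PSK interchange format: the prefix NVMeTLSkey-1, the hash indicator 02 (from the digest argument 2), and a base64 blob holding the configured key bytes plus a 4-byte CRC32 trailer, closed by a trailing colon. A standalone sketch of what the inline python step computes (the little-endian CRC32 layout is inferred from the logged output, not quoted from nvmf/common.sh):

    key=00112233445566778899aabbccddeeff0011223344556677
    # append CRC32 (little-endian) to the ASCII key bytes, then base64 the result
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' "$key"
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The key is then written to a mktemp file and chmod 0600, which matters later: the keyring refuses key files readable by group or other, as the @171 and @178 experiments below demonstrate.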
00:23:21.411 [2024-12-16 22:29:11.061854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.411 [2024-12-16 22:29:11.061862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.411 [2024-12-16 22:29:11.061867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.411 [2024-12-16 22:29:11.062374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vWxpl3eHQY 00:23:21.670 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.670 [2024-12-16 22:29:11.361053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.929 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:21.929 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.187 [2024-12-16 22:29:11.725981] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.187 [2024-12-16 22:29:11.726175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.187 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.445 malloc0 00:23:22.446 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.446 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:22.704 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vWxpl3eHQY 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vWxpl3eHQY 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=352012 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 352012 /var/tmp/bdevperf.sock 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352012 ']' 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.963 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.963 [2024-12-16 22:29:12.574758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
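For reference while reading the bdevperf startup below, setup_nvmf_tgt above arms the target in a fixed order: TCP transport, subsystem, TLS listener (-k), a malloc namespace, then the key and the one host allowed to use it. Condensed from the trace, with the full rpc.py path captured in a variable for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0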
00:23:22.963 [2024-12-16 22:29:12.574803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352012 ] 00:23:22.963 [2024-12-16 22:29:12.642630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.963 [2024-12-16 22:29:12.664820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.222 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.222 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.222 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:23.481 22:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.481 [2024-12-16 22:29:13.103890] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.481 TLSTESTn1 00:23:23.739 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.739 Running I/O for 10 seconds... 00:23:25.610 4983.00 IOPS, 19.46 MiB/s [2024-12-16T21:29:16.686Z] 5261.00 IOPS, 20.55 MiB/s [2024-12-16T21:29:17.622Z] 5233.33 IOPS, 20.44 MiB/s [2024-12-16T21:29:18.558Z] 5205.75 IOPS, 20.33 MiB/s [2024-12-16T21:29:19.494Z] 5266.20 IOPS, 20.57 MiB/s [2024-12-16T21:29:20.429Z] 5259.17 IOPS, 20.54 MiB/s [2024-12-16T21:29:21.365Z] 5290.57 IOPS, 20.67 MiB/s [2024-12-16T21:29:22.741Z] 5232.50 IOPS, 20.44 MiB/s [2024-12-16T21:29:23.677Z] 5226.78 IOPS, 20.42 MiB/s [2024-12-16T21:29:23.677Z] 5161.50 IOPS, 20.16 MiB/s 00:23:33.976 Latency(us) 00:23:33.976 [2024-12-16T21:29:23.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.976 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.976 Verification LBA range: start 0x0 length 0x2000 00:23:33.976 TLSTESTn1 : 10.04 5155.83 20.14 0.00 0.00 24772.37 4805.97 34203.55 00:23:33.976 [2024-12-16T21:29:23.677Z] =================================================================================================================== 00:23:33.976 [2024-12-16T21:29:23.677Z] Total : 5155.83 20.14 0.00 0.00 24772.37 4805.97 34203.55 00:23:33.976 { 00:23:33.976 "results": [ 00:23:33.976 { 00:23:33.976 "job": "TLSTESTn1", 00:23:33.976 "core_mask": "0x4", 00:23:33.976 "workload": "verify", 00:23:33.976 "status": "finished", 00:23:33.976 "verify_range": { 00:23:33.976 "start": 0, 00:23:33.976 "length": 8192 00:23:33.976 }, 00:23:33.976 "queue_depth": 128, 00:23:33.976 "io_size": 4096, 00:23:33.976 "runtime": 10.035435, 00:23:33.976 "iops": 5155.8303152778135, 00:23:33.976 "mibps": 20.13996216905396, 00:23:33.976 "io_failed": 0, 00:23:33.976 "io_timeout": 0, 00:23:33.976 "avg_latency_us": 24772.36866981237, 00:23:33.976 "min_latency_us": 4805.973333333333, 00:23:33.976 "max_latency_us": 34203.550476190474 00:23:33.976 } 00:23:33.976 ], 00:23:33.976 
"core_count": 1 00:23:33.976 } 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 352012 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352012 ']' 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352012 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352012 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352012' 00:23:33.976 killing process with pid 352012 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352012 00:23:33.976 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.976 00:23:33.976 Latency(us) 00:23:33.976 [2024-12-16T21:29:23.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.976 [2024-12-16T21:29:23.677Z] =================================================================================================================== 00:23:33.976 [2024-12-16T21:29:23.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352012 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vWxpl3eHQY 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vWxpl3eHQY 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vWxpl3eHQY 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vWxpl3eHQY 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:33.976 
22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vWxpl3eHQY 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353797 00:23:33.976 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353797 /var/tmp/bdevperf.sock 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353797 ']' 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.977 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.977 [2024-12-16 22:29:23.630735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
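Before the bad-permissions variant below proceeds, note what the passing case looked like: with key0 registered on both sides, the TLSTESTn1 run above attached successfully and sustained about 5,150 IOPS on average over the 10-second verify workload. The initiator half, condensed from the run_bdevperf trace (socket and key paths are the test's own):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # drive the verify workload against the attached controller
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s $sock perform_tests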
00:23:33.977 [2024-12-16 22:29:23.630783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353797 ] 00:23:34.236 [2024-12-16 22:29:23.701946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.236 [2024-12-16 22:29:23.721674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.236 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.236 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.236 22:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:34.494 [2024-12-16 22:29:23.983934] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vWxpl3eHQY': 0100666 00:23:34.494 [2024-12-16 22:29:23.983966] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:34.494 request: 00:23:34.494 { 00:23:34.494 "name": "key0", 00:23:34.494 "path": "/tmp/tmp.vWxpl3eHQY", 00:23:34.494 "method": "keyring_file_add_key", 00:23:34.494 "req_id": 1 00:23:34.494 } 00:23:34.494 Got JSON-RPC error response 00:23:34.494 response: 00:23:34.494 { 00:23:34.494 "code": -1, 00:23:34.494 "message": "Operation not permitted" 00:23:34.494 } 00:23:34.494 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.494 [2024-12-16 22:29:24.176507] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.494 [2024-12-16 22:29:24.176540] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:34.494 request: 00:23:34.494 { 00:23:34.494 "name": "TLSTEST", 00:23:34.494 "trtype": "tcp", 00:23:34.494 "traddr": "10.0.0.2", 00:23:34.494 "adrfam": "ipv4", 00:23:34.494 "trsvcid": "4420", 00:23:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.494 "prchk_reftag": false, 00:23:34.494 "prchk_guard": false, 00:23:34.494 "hdgst": false, 00:23:34.494 "ddgst": false, 00:23:34.494 "psk": "key0", 00:23:34.494 "allow_unrecognized_csi": false, 00:23:34.494 "method": "bdev_nvme_attach_controller", 00:23:34.494 "req_id": 1 00:23:34.494 } 00:23:34.494 Got JSON-RPC error response 00:23:34.494 response: 00:23:34.494 { 00:23:34.494 "code": -126, 00:23:34.494 "message": "Required key not available" 00:23:34.494 } 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353797 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353797 ']' 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353797 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353797 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353797' 00:23:34.754 killing process with pid 353797 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353797 00:23:34.754 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.754 00:23:34.754 Latency(us) 00:23:34.754 [2024-12-16T21:29:24.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.754 [2024-12-16T21:29:24.455Z] =================================================================================================================== 00:23:34.754 [2024-12-16T21:29:24.455Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353797 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 351751 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351751 ']' 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351751 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.754 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351751 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351751' 00:23:35.013 killing process with pid 351751 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351751 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351751 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=353832 
00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 353832 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353832 ']' 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.013 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.013 [2024-12-16 22:29:24.681810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:35.013 [2024-12-16 22:29:24.681859] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.272 [2024-12-16 22:29:24.756131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.272 [2024-12-16 22:29:24.775518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.272 [2024-12-16 22:29:24.775554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.272 [2024-12-16 22:29:24.775561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.272 [2024-12-16 22:29:24.775568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.272 [2024-12-16 22:29:24.775575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
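The @171 chmod 0666 and the @172 negative attach above confirm the initiator-side permission check: keyring_file_add_key fails with -1 ("Invalid permissions for key file ... 0100666") and the dependent attach fails with -126, since key0 never entered the keyring. The fresh target started here (pid 353832) exists to repeat the same experiment on the target side. The moving part is just the file mode (a sketch; the log only demonstrates that 0666 is rejected and 0600 accepted):

    chmod 0666 /tmp/tmp.vWxpl3eHQY   # group/world-readable: add_key returns -1
    chmod 0600 /tmp/tmp.vWxpl3eHQY   # owner-only: add_key succeeds (see @182 below)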
00:23:35.272 [2024-12-16 22:29:24.776102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vWxpl3eHQY 00:23:35.272 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.530 [2024-12-16 22:29:25.090626] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.530 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:35.787 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:35.787 [2024-12-16 22:29:25.483633] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.787 [2024-12-16 22:29:25.483826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.046 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.046 malloc0 00:23:36.046 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.309 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:36.568 [2024-12-16 
22:29:26.040820] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vWxpl3eHQY': 0100666 00:23:36.568 [2024-12-16 22:29:26.040842] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:36.568 request: 00:23:36.568 { 00:23:36.568 "name": "key0", 00:23:36.568 "path": "/tmp/tmp.vWxpl3eHQY", 00:23:36.568 "method": "keyring_file_add_key", 00:23:36.568 "req_id": 1 00:23:36.568 } 00:23:36.568 Got JSON-RPC error response 00:23:36.568 response: 00:23:36.568 { 00:23:36.568 "code": -1, 00:23:36.568 "message": "Operation not permitted" 00:23:36.568 } 00:23:36.568 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.568 [2024-12-16 22:29:26.257398] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:36.568 [2024-12-16 22:29:26.257428] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:36.568 request: 00:23:36.568 { 00:23:36.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.568 "host": "nqn.2016-06.io.spdk:host1", 00:23:36.568 "psk": "key0", 00:23:36.568 "method": "nvmf_subsystem_add_host", 00:23:36.568 "req_id": 1 00:23:36.568 } 00:23:36.568 Got JSON-RPC error response 00:23:36.568 response: 00:23:36.568 { 00:23:36.568 "code": -32603, 00:23:36.568 "message": "Internal error" 00:23:36.568 } 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353832 ']' 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353832' 00:23:36.827 killing process with pid 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353832 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vWxpl3eHQY 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354293 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354293 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354293 ']' 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.827 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.086 [2024-12-16 22:29:26.568583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:37.086 [2024-12-16 22:29:26.568629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.086 [2024-12-16 22:29:26.641747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.086 [2024-12-16 22:29:26.659674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.086 [2024-12-16 22:29:26.659707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.086 [2024-12-16 22:29:26.659714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.086 [2024-12-16 22:29:26.659722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.086 [2024-12-16 22:29:26.659727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
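The failed keyring_file_add_key above is the intended negative case: SPDK's file-based keyring refuses a PSK file that is readable by group or other (the log shows mode 0100666), so the follow-up nvmf_subsystem_add_host cannot resolve key0 and fails with -32603. A minimal sketch of the remedy the script applies before the next pass, assuming the PSK file already holds a key in NVMe TLS interchange format (the path is taken from the log; ./scripts/rpc.py abbreviates the full workspace path):

  # The keyring only accepts PSK files private to their owner
  KEYFILE=/tmp/tmp.vWxpl3eHQY
  chmod 0600 "$KEYFILE"                 # clears the 0666 mode rejected above
  ./scripts/rpc.py keyring_file_add_key key0 "$KEYFILE"
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0

With the permissions corrected, the same two RPCs succeed in the trace that follows.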
00:23:37.086 [2024-12-16 22:29:26.660220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.086 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.086 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.086 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.086 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.086 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.344 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.344 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:37.344 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vWxpl3eHQY 00:23:37.344 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.344 [2024-12-16 22:29:26.966421] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.344 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.602 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.861 [2024-12-16 22:29:27.367458] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.861 [2024-12-16 22:29:27.367664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.861 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:38.120 malloc0 00:23:38.120 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.120 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:38.379 22:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=354541 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 354541 /var/tmp/bdevperf.sock 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 354541 ']' 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.638 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.638 [2024-12-16 22:29:28.184879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:38.638 [2024-12-16 22:29:28.184928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354541 ] 00:23:38.638 [2024-12-16 22:29:28.259805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.638 [2024-12-16 22:29:28.281703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.897 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.897 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.897 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:38.897 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:39.155 [2024-12-16 22:29:28.737083] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.155 TLSTESTn1 00:23:39.155 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:39.725 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:39.725 "subsystems": [ 00:23:39.725 { 00:23:39.725 "subsystem": "keyring", 00:23:39.725 "config": [ 00:23:39.725 { 00:23:39.725 "method": "keyring_file_add_key", 00:23:39.725 "params": { 00:23:39.725 "name": "key0", 00:23:39.725 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:39.725 } 00:23:39.725 } 00:23:39.725 ] 00:23:39.725 }, 00:23:39.725 { 00:23:39.725 "subsystem": "iobuf", 00:23:39.725 "config": [ 00:23:39.725 { 00:23:39.725 "method": "iobuf_set_options", 00:23:39.725 "params": { 00:23:39.725 "small_pool_count": 8192, 00:23:39.725 "large_pool_count": 1024, 00:23:39.725 "small_bufsize": 8192, 00:23:39.725 "large_bufsize": 135168, 00:23:39.725 "enable_numa": false 00:23:39.725 } 00:23:39.725 } 00:23:39.725 ] 00:23:39.725 }, 00:23:39.725 { 00:23:39.725 "subsystem": "sock", 00:23:39.725 "config": [ 00:23:39.725 { 00:23:39.725 "method": "sock_set_default_impl", 00:23:39.725 "params": { 00:23:39.725 "impl_name": "posix" 
00:23:39.725 } 00:23:39.725 }, 00:23:39.725 { 00:23:39.725 "method": "sock_impl_set_options", 00:23:39.725 "params": { 00:23:39.725 "impl_name": "ssl", 00:23:39.725 "recv_buf_size": 4096, 00:23:39.725 "send_buf_size": 4096, 00:23:39.725 "enable_recv_pipe": true, 00:23:39.725 "enable_quickack": false, 00:23:39.725 "enable_placement_id": 0, 00:23:39.725 "enable_zerocopy_send_server": true, 00:23:39.725 "enable_zerocopy_send_client": false, 00:23:39.725 "zerocopy_threshold": 0, 00:23:39.725 "tls_version": 0, 00:23:39.725 "enable_ktls": false 00:23:39.725 } 00:23:39.725 }, 00:23:39.725 { 00:23:39.725 "method": "sock_impl_set_options", 00:23:39.725 "params": { 00:23:39.725 "impl_name": "posix", 00:23:39.725 "recv_buf_size": 2097152, 00:23:39.725 "send_buf_size": 2097152, 00:23:39.725 "enable_recv_pipe": true, 00:23:39.725 "enable_quickack": false, 00:23:39.725 "enable_placement_id": 0, 00:23:39.725 "enable_zerocopy_send_server": true, 00:23:39.725 "enable_zerocopy_send_client": false, 00:23:39.725 "zerocopy_threshold": 0, 00:23:39.725 "tls_version": 0, 00:23:39.725 "enable_ktls": false 00:23:39.725 } 00:23:39.725 } 00:23:39.725 ] 00:23:39.725 }, 00:23:39.725 { 00:23:39.726 "subsystem": "vmd", 00:23:39.726 "config": [] 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "subsystem": "accel", 00:23:39.726 "config": [ 00:23:39.726 { 00:23:39.726 "method": "accel_set_options", 00:23:39.726 "params": { 00:23:39.726 "small_cache_size": 128, 00:23:39.726 "large_cache_size": 16, 00:23:39.726 "task_count": 2048, 00:23:39.726 "sequence_count": 2048, 00:23:39.726 "buf_count": 2048 00:23:39.726 } 00:23:39.726 } 00:23:39.726 ] 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "subsystem": "bdev", 00:23:39.726 "config": [ 00:23:39.726 { 00:23:39.726 "method": "bdev_set_options", 00:23:39.726 "params": { 00:23:39.726 "bdev_io_pool_size": 65535, 00:23:39.726 "bdev_io_cache_size": 256, 00:23:39.726 "bdev_auto_examine": true, 00:23:39.726 "iobuf_small_cache_size": 128, 00:23:39.726 "iobuf_large_cache_size": 16 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_raid_set_options", 00:23:39.726 "params": { 00:23:39.726 "process_window_size_kb": 1024, 00:23:39.726 "process_max_bandwidth_mb_sec": 0 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_iscsi_set_options", 00:23:39.726 "params": { 00:23:39.726 "timeout_sec": 30 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_nvme_set_options", 00:23:39.726 "params": { 00:23:39.726 "action_on_timeout": "none", 00:23:39.726 "timeout_us": 0, 00:23:39.726 "timeout_admin_us": 0, 00:23:39.726 "keep_alive_timeout_ms": 10000, 00:23:39.726 "arbitration_burst": 0, 00:23:39.726 "low_priority_weight": 0, 00:23:39.726 "medium_priority_weight": 0, 00:23:39.726 "high_priority_weight": 0, 00:23:39.726 "nvme_adminq_poll_period_us": 10000, 00:23:39.726 "nvme_ioq_poll_period_us": 0, 00:23:39.726 "io_queue_requests": 0, 00:23:39.726 "delay_cmd_submit": true, 00:23:39.726 "transport_retry_count": 4, 00:23:39.726 "bdev_retry_count": 3, 00:23:39.726 "transport_ack_timeout": 0, 00:23:39.726 "ctrlr_loss_timeout_sec": 0, 00:23:39.726 "reconnect_delay_sec": 0, 00:23:39.726 "fast_io_fail_timeout_sec": 0, 00:23:39.726 "disable_auto_failback": false, 00:23:39.726 "generate_uuids": false, 00:23:39.726 "transport_tos": 0, 00:23:39.726 "nvme_error_stat": false, 00:23:39.726 "rdma_srq_size": 0, 00:23:39.726 "io_path_stat": false, 00:23:39.726 "allow_accel_sequence": false, 00:23:39.726 "rdma_max_cq_size": 0, 00:23:39.726 
"rdma_cm_event_timeout_ms": 0, 00:23:39.726 "dhchap_digests": [ 00:23:39.726 "sha256", 00:23:39.726 "sha384", 00:23:39.726 "sha512" 00:23:39.726 ], 00:23:39.726 "dhchap_dhgroups": [ 00:23:39.726 "null", 00:23:39.726 "ffdhe2048", 00:23:39.726 "ffdhe3072", 00:23:39.726 "ffdhe4096", 00:23:39.726 "ffdhe6144", 00:23:39.726 "ffdhe8192" 00:23:39.726 ], 00:23:39.726 "rdma_umr_per_io": false 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_nvme_set_hotplug", 00:23:39.726 "params": { 00:23:39.726 "period_us": 100000, 00:23:39.726 "enable": false 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_malloc_create", 00:23:39.726 "params": { 00:23:39.726 "name": "malloc0", 00:23:39.726 "num_blocks": 8192, 00:23:39.726 "block_size": 4096, 00:23:39.726 "physical_block_size": 4096, 00:23:39.726 "uuid": "1b825a26-8bf3-4816-8b83-70fc2be30a0e", 00:23:39.726 "optimal_io_boundary": 0, 00:23:39.726 "md_size": 0, 00:23:39.726 "dif_type": 0, 00:23:39.726 "dif_is_head_of_md": false, 00:23:39.726 "dif_pi_format": 0 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "bdev_wait_for_examine" 00:23:39.726 } 00:23:39.726 ] 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "subsystem": "nbd", 00:23:39.726 "config": [] 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "subsystem": "scheduler", 00:23:39.726 "config": [ 00:23:39.726 { 00:23:39.726 "method": "framework_set_scheduler", 00:23:39.726 "params": { 00:23:39.726 "name": "static" 00:23:39.726 } 00:23:39.726 } 00:23:39.726 ] 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "subsystem": "nvmf", 00:23:39.726 "config": [ 00:23:39.726 { 00:23:39.726 "method": "nvmf_set_config", 00:23:39.726 "params": { 00:23:39.726 "discovery_filter": "match_any", 00:23:39.726 "admin_cmd_passthru": { 00:23:39.726 "identify_ctrlr": false 00:23:39.726 }, 00:23:39.726 "dhchap_digests": [ 00:23:39.726 "sha256", 00:23:39.726 "sha384", 00:23:39.726 "sha512" 00:23:39.726 ], 00:23:39.726 "dhchap_dhgroups": [ 00:23:39.726 "null", 00:23:39.726 "ffdhe2048", 00:23:39.726 "ffdhe3072", 00:23:39.726 "ffdhe4096", 00:23:39.726 "ffdhe6144", 00:23:39.726 "ffdhe8192" 00:23:39.726 ] 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "nvmf_set_max_subsystems", 00:23:39.726 "params": { 00:23:39.726 "max_subsystems": 1024 00:23:39.726 } 00:23:39.726 }, 00:23:39.726 { 00:23:39.726 "method": "nvmf_set_crdt", 00:23:39.726 "params": { 00:23:39.727 "crdt1": 0, 00:23:39.727 "crdt2": 0, 00:23:39.727 "crdt3": 0 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "nvmf_create_transport", 00:23:39.727 "params": { 00:23:39.727 "trtype": "TCP", 00:23:39.727 "max_queue_depth": 128, 00:23:39.727 "max_io_qpairs_per_ctrlr": 127, 00:23:39.727 "in_capsule_data_size": 4096, 00:23:39.727 "max_io_size": 131072, 00:23:39.727 "io_unit_size": 131072, 00:23:39.727 "max_aq_depth": 128, 00:23:39.727 "num_shared_buffers": 511, 00:23:39.727 "buf_cache_size": 4294967295, 00:23:39.727 "dif_insert_or_strip": false, 00:23:39.727 "zcopy": false, 00:23:39.727 "c2h_success": false, 00:23:39.727 "sock_priority": 0, 00:23:39.727 "abort_timeout_sec": 1, 00:23:39.727 "ack_timeout": 0, 00:23:39.727 "data_wr_pool_size": 0 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "nvmf_create_subsystem", 00:23:39.727 "params": { 00:23:39.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.727 "allow_any_host": false, 00:23:39.727 "serial_number": "SPDK00000000000001", 00:23:39.727 "model_number": "SPDK bdev Controller", 00:23:39.727 "max_namespaces": 10, 
00:23:39.727 "min_cntlid": 1, 00:23:39.727 "max_cntlid": 65519, 00:23:39.727 "ana_reporting": false 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "nvmf_subsystem_add_host", 00:23:39.727 "params": { 00:23:39.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.727 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.727 "psk": "key0" 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "nvmf_subsystem_add_ns", 00:23:39.727 "params": { 00:23:39.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.727 "namespace": { 00:23:39.727 "nsid": 1, 00:23:39.727 "bdev_name": "malloc0", 00:23:39.727 "nguid": "1B825A268BF348168B8370FC2BE30A0E", 00:23:39.727 "uuid": "1b825a26-8bf3-4816-8b83-70fc2be30a0e", 00:23:39.727 "no_auto_visible": false 00:23:39.727 } 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "nvmf_subsystem_add_listener", 00:23:39.727 "params": { 00:23:39.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.727 "listen_address": { 00:23:39.727 "trtype": "TCP", 00:23:39.727 "adrfam": "IPv4", 00:23:39.727 "traddr": "10.0.0.2", 00:23:39.727 "trsvcid": "4420" 00:23:39.727 }, 00:23:39.727 "secure_channel": true 00:23:39.727 } 00:23:39.727 } 00:23:39.727 ] 00:23:39.727 } 00:23:39.727 ] 00:23:39.727 }' 00:23:39.727 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:39.727 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:39.727 "subsystems": [ 00:23:39.727 { 00:23:39.727 "subsystem": "keyring", 00:23:39.727 "config": [ 00:23:39.727 { 00:23:39.727 "method": "keyring_file_add_key", 00:23:39.727 "params": { 00:23:39.727 "name": "key0", 00:23:39.727 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:39.727 } 00:23:39.727 } 00:23:39.727 ] 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "subsystem": "iobuf", 00:23:39.727 "config": [ 00:23:39.727 { 00:23:39.727 "method": "iobuf_set_options", 00:23:39.727 "params": { 00:23:39.727 "small_pool_count": 8192, 00:23:39.727 "large_pool_count": 1024, 00:23:39.727 "small_bufsize": 8192, 00:23:39.727 "large_bufsize": 135168, 00:23:39.727 "enable_numa": false 00:23:39.727 } 00:23:39.727 } 00:23:39.727 ] 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "subsystem": "sock", 00:23:39.727 "config": [ 00:23:39.727 { 00:23:39.727 "method": "sock_set_default_impl", 00:23:39.727 "params": { 00:23:39.727 "impl_name": "posix" 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "sock_impl_set_options", 00:23:39.727 "params": { 00:23:39.727 "impl_name": "ssl", 00:23:39.727 "recv_buf_size": 4096, 00:23:39.727 "send_buf_size": 4096, 00:23:39.727 "enable_recv_pipe": true, 00:23:39.727 "enable_quickack": false, 00:23:39.727 "enable_placement_id": 0, 00:23:39.727 "enable_zerocopy_send_server": true, 00:23:39.727 "enable_zerocopy_send_client": false, 00:23:39.727 "zerocopy_threshold": 0, 00:23:39.727 "tls_version": 0, 00:23:39.727 "enable_ktls": false 00:23:39.727 } 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "method": "sock_impl_set_options", 00:23:39.727 "params": { 00:23:39.727 "impl_name": "posix", 00:23:39.727 "recv_buf_size": 2097152, 00:23:39.727 "send_buf_size": 2097152, 00:23:39.727 "enable_recv_pipe": true, 00:23:39.727 "enable_quickack": false, 00:23:39.727 "enable_placement_id": 0, 00:23:39.727 "enable_zerocopy_send_server": true, 00:23:39.727 "enable_zerocopy_send_client": false, 00:23:39.727 "zerocopy_threshold": 0, 00:23:39.727 "tls_version": 0, 00:23:39.727 
"enable_ktls": false 00:23:39.727 } 00:23:39.727 } 00:23:39.727 ] 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "subsystem": "vmd", 00:23:39.727 "config": [] 00:23:39.727 }, 00:23:39.727 { 00:23:39.727 "subsystem": "accel", 00:23:39.727 "config": [ 00:23:39.727 { 00:23:39.727 "method": "accel_set_options", 00:23:39.727 "params": { 00:23:39.727 "small_cache_size": 128, 00:23:39.727 "large_cache_size": 16, 00:23:39.728 "task_count": 2048, 00:23:39.728 "sequence_count": 2048, 00:23:39.728 "buf_count": 2048 00:23:39.728 } 00:23:39.728 } 00:23:39.728 ] 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "subsystem": "bdev", 00:23:39.728 "config": [ 00:23:39.728 { 00:23:39.728 "method": "bdev_set_options", 00:23:39.728 "params": { 00:23:39.728 "bdev_io_pool_size": 65535, 00:23:39.728 "bdev_io_cache_size": 256, 00:23:39.728 "bdev_auto_examine": true, 00:23:39.728 "iobuf_small_cache_size": 128, 00:23:39.728 "iobuf_large_cache_size": 16 00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_raid_set_options", 00:23:39.728 "params": { 00:23:39.728 "process_window_size_kb": 1024, 00:23:39.728 "process_max_bandwidth_mb_sec": 0 00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_iscsi_set_options", 00:23:39.728 "params": { 00:23:39.728 "timeout_sec": 30 00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_nvme_set_options", 00:23:39.728 "params": { 00:23:39.728 "action_on_timeout": "none", 00:23:39.728 "timeout_us": 0, 00:23:39.728 "timeout_admin_us": 0, 00:23:39.728 "keep_alive_timeout_ms": 10000, 00:23:39.728 "arbitration_burst": 0, 00:23:39.728 "low_priority_weight": 0, 00:23:39.728 "medium_priority_weight": 0, 00:23:39.728 "high_priority_weight": 0, 00:23:39.728 "nvme_adminq_poll_period_us": 10000, 00:23:39.728 "nvme_ioq_poll_period_us": 0, 00:23:39.728 "io_queue_requests": 512, 00:23:39.728 "delay_cmd_submit": true, 00:23:39.728 "transport_retry_count": 4, 00:23:39.728 "bdev_retry_count": 3, 00:23:39.728 "transport_ack_timeout": 0, 00:23:39.728 "ctrlr_loss_timeout_sec": 0, 00:23:39.728 "reconnect_delay_sec": 0, 00:23:39.728 "fast_io_fail_timeout_sec": 0, 00:23:39.728 "disable_auto_failback": false, 00:23:39.728 "generate_uuids": false, 00:23:39.728 "transport_tos": 0, 00:23:39.728 "nvme_error_stat": false, 00:23:39.728 "rdma_srq_size": 0, 00:23:39.728 "io_path_stat": false, 00:23:39.728 "allow_accel_sequence": false, 00:23:39.728 "rdma_max_cq_size": 0, 00:23:39.728 "rdma_cm_event_timeout_ms": 0, 00:23:39.728 "dhchap_digests": [ 00:23:39.728 "sha256", 00:23:39.728 "sha384", 00:23:39.728 "sha512" 00:23:39.728 ], 00:23:39.728 "dhchap_dhgroups": [ 00:23:39.728 "null", 00:23:39.728 "ffdhe2048", 00:23:39.728 "ffdhe3072", 00:23:39.728 "ffdhe4096", 00:23:39.728 "ffdhe6144", 00:23:39.728 "ffdhe8192" 00:23:39.728 ], 00:23:39.728 "rdma_umr_per_io": false 00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_nvme_attach_controller", 00:23:39.728 "params": { 00:23:39.728 "name": "TLSTEST", 00:23:39.728 "trtype": "TCP", 00:23:39.728 "adrfam": "IPv4", 00:23:39.728 "traddr": "10.0.0.2", 00:23:39.728 "trsvcid": "4420", 00:23:39.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.728 "prchk_reftag": false, 00:23:39.728 "prchk_guard": false, 00:23:39.728 "ctrlr_loss_timeout_sec": 0, 00:23:39.728 "reconnect_delay_sec": 0, 00:23:39.728 "fast_io_fail_timeout_sec": 0, 00:23:39.728 "psk": "key0", 00:23:39.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.728 "hdgst": false, 00:23:39.728 "ddgst": false, 00:23:39.728 "multipath": "multipath" 
00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_nvme_set_hotplug", 00:23:39.728 "params": { 00:23:39.728 "period_us": 100000, 00:23:39.728 "enable": false 00:23:39.728 } 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "method": "bdev_wait_for_examine" 00:23:39.728 } 00:23:39.728 ] 00:23:39.728 }, 00:23:39.728 { 00:23:39.728 "subsystem": "nbd", 00:23:39.728 "config": [] 00:23:39.728 } 00:23:39.728 ] 00:23:39.728 }' 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 354541 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354541 ']' 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354541 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.728 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354541 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354541' 00:23:39.988 killing process with pid 354541 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354541 00:23:39.988 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.988 00:23:39.988 Latency(us) 00:23:39.988 [2024-12-16T21:29:29.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.988 [2024-12-16T21:29:29.689Z] =================================================================================================================== 00:23:39.988 [2024-12-16T21:29:29.689Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354541 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 354293 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354293 ']' 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354293 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354293 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354293' 00:23:39.988 killing process with pid 354293 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354293 00:23:39.988 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354293 
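Both JSON documents above are save_config dumps: one taken over the target's default RPC socket into tgtconf, one over /var/tmp/bdevperf.sock into bdevperfconf. The next phase appears to restart both applications non-interactively by echoing these captures into -c through bash process substitution, which would account for the /dev/fd/62 and /dev/fd/63 paths traced below. A sketch of that round trip under this reading (relative paths abbreviate the workspace paths; the ip netns exec wrapper shown in the log is omitted):

  # Capture live RPC state, then replay it on a fresh start without touching disk;
  # <(echo ...) expands to a /dev/fd/NN path, matching the arguments in the log
  tgtconf=$(./scripts/rpc.py save_config)
  bdevperfconf=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &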
00:23:40.248 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:40.248 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.248 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.248 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:40.248 "subsystems": [ 00:23:40.248 { 00:23:40.248 "subsystem": "keyring", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "keyring_file_add_key", 00:23:40.248 "params": { 00:23:40.248 "name": "key0", 00:23:40.248 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:40.248 } 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "iobuf", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "iobuf_set_options", 00:23:40.248 "params": { 00:23:40.248 "small_pool_count": 8192, 00:23:40.248 "large_pool_count": 1024, 00:23:40.248 "small_bufsize": 8192, 00:23:40.248 "large_bufsize": 135168, 00:23:40.248 "enable_numa": false 00:23:40.248 } 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "sock", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "sock_set_default_impl", 00:23:40.248 "params": { 00:23:40.248 "impl_name": "posix" 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "sock_impl_set_options", 00:23:40.248 "params": { 00:23:40.248 "impl_name": "ssl", 00:23:40.248 "recv_buf_size": 4096, 00:23:40.248 "send_buf_size": 4096, 00:23:40.248 "enable_recv_pipe": true, 00:23:40.248 "enable_quickack": false, 00:23:40.248 "enable_placement_id": 0, 00:23:40.248 "enable_zerocopy_send_server": true, 00:23:40.248 "enable_zerocopy_send_client": false, 00:23:40.248 "zerocopy_threshold": 0, 00:23:40.248 "tls_version": 0, 00:23:40.248 "enable_ktls": false 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "sock_impl_set_options", 00:23:40.248 "params": { 00:23:40.248 "impl_name": "posix", 00:23:40.248 "recv_buf_size": 2097152, 00:23:40.248 "send_buf_size": 2097152, 00:23:40.248 "enable_recv_pipe": true, 00:23:40.248 "enable_quickack": false, 00:23:40.248 "enable_placement_id": 0, 00:23:40.248 "enable_zerocopy_send_server": true, 00:23:40.248 "enable_zerocopy_send_client": false, 00:23:40.248 "zerocopy_threshold": 0, 00:23:40.248 "tls_version": 0, 00:23:40.248 "enable_ktls": false 00:23:40.248 } 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "vmd", 00:23:40.248 "config": [] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "accel", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "accel_set_options", 00:23:40.248 "params": { 00:23:40.248 "small_cache_size": 128, 00:23:40.248 "large_cache_size": 16, 00:23:40.248 "task_count": 2048, 00:23:40.248 "sequence_count": 2048, 00:23:40.248 "buf_count": 2048 00:23:40.248 } 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "bdev", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "bdev_set_options", 00:23:40.248 "params": { 00:23:40.248 "bdev_io_pool_size": 65535, 00:23:40.248 "bdev_io_cache_size": 256, 00:23:40.248 "bdev_auto_examine": true, 00:23:40.248 "iobuf_small_cache_size": 128, 00:23:40.248 "iobuf_large_cache_size": 16 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_raid_set_options", 00:23:40.248 "params": { 00:23:40.248 "process_window_size_kb": 1024, 00:23:40.248 
"process_max_bandwidth_mb_sec": 0 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_iscsi_set_options", 00:23:40.248 "params": { 00:23:40.248 "timeout_sec": 30 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_nvme_set_options", 00:23:40.248 "params": { 00:23:40.248 "action_on_timeout": "none", 00:23:40.248 "timeout_us": 0, 00:23:40.248 "timeout_admin_us": 0, 00:23:40.248 "keep_alive_timeout_ms": 10000, 00:23:40.248 "arbitration_burst": 0, 00:23:40.248 "low_priority_weight": 0, 00:23:40.248 "medium_priority_weight": 0, 00:23:40.248 "high_priority_weight": 0, 00:23:40.248 "nvme_adminq_poll_period_us": 10000, 00:23:40.248 "nvme_ioq_poll_period_us": 0, 00:23:40.248 "io_queue_requests": 0, 00:23:40.248 "delay_cmd_submit": true, 00:23:40.248 "transport_retry_count": 4, 00:23:40.248 "bdev_retry_count": 3, 00:23:40.248 "transport_ack_timeout": 0, 00:23:40.248 "ctrlr_loss_timeout_sec": 0, 00:23:40.248 "reconnect_delay_sec": 0, 00:23:40.248 "fast_io_fail_timeout_sec": 0, 00:23:40.248 "disable_auto_failback": false, 00:23:40.248 "generate_uuids": false, 00:23:40.248 "transport_tos": 0, 00:23:40.248 "nvme_error_stat": false, 00:23:40.248 "rdma_srq_size": 0, 00:23:40.248 "io_path_stat": false, 00:23:40.248 "allow_accel_sequence": false, 00:23:40.248 "rdma_max_cq_size": 0, 00:23:40.248 "rdma_cm_event_timeout_ms": 0, 00:23:40.248 "dhchap_digests": [ 00:23:40.248 "sha256", 00:23:40.248 "sha384", 00:23:40.248 "sha512" 00:23:40.248 ], 00:23:40.248 "dhchap_dhgroups": [ 00:23:40.248 "null", 00:23:40.248 "ffdhe2048", 00:23:40.248 "ffdhe3072", 00:23:40.248 "ffdhe4096", 00:23:40.248 "ffdhe6144", 00:23:40.248 "ffdhe8192" 00:23:40.248 ], 00:23:40.248 "rdma_umr_per_io": false 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_nvme_set_hotplug", 00:23:40.248 "params": { 00:23:40.248 "period_us": 100000, 00:23:40.248 "enable": false 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_malloc_create", 00:23:40.248 "params": { 00:23:40.248 "name": "malloc0", 00:23:40.248 "num_blocks": 8192, 00:23:40.248 "block_size": 4096, 00:23:40.248 "physical_block_size": 4096, 00:23:40.248 "uuid": "1b825a26-8bf3-4816-8b83-70fc2be30a0e", 00:23:40.248 "optimal_io_boundary": 0, 00:23:40.248 "md_size": 0, 00:23:40.248 "dif_type": 0, 00:23:40.248 "dif_is_head_of_md": false, 00:23:40.248 "dif_pi_format": 0 00:23:40.248 } 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "method": "bdev_wait_for_examine" 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "nbd", 00:23:40.248 "config": [] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "scheduler", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "framework_set_scheduler", 00:23:40.248 "params": { 00:23:40.248 "name": "static" 00:23:40.248 } 00:23:40.248 } 00:23:40.248 ] 00:23:40.248 }, 00:23:40.248 { 00:23:40.248 "subsystem": "nvmf", 00:23:40.248 "config": [ 00:23:40.248 { 00:23:40.248 "method": "nvmf_set_config", 00:23:40.248 "params": { 00:23:40.248 "discovery_filter": "match_any", 00:23:40.248 "admin_cmd_passthru": { 00:23:40.248 "identify_ctrlr": false 00:23:40.248 }, 00:23:40.248 "dhchap_digests": [ 00:23:40.248 "sha256", 00:23:40.248 "sha384", 00:23:40.248 "sha512" 00:23:40.248 ], 00:23:40.248 "dhchap_dhgroups": [ 00:23:40.248 "null", 00:23:40.248 "ffdhe2048", 00:23:40.248 "ffdhe3072", 00:23:40.248 "ffdhe4096", 00:23:40.248 "ffdhe6144", 00:23:40.248 "ffdhe8192" 00:23:40.249 ] 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 
"method": "nvmf_set_max_subsystems", 00:23:40.249 "params": { 00:23:40.249 "max_subsystems": 1024 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_set_crdt", 00:23:40.249 "params": { 00:23:40.249 "crdt1": 0, 00:23:40.249 "crdt2": 0, 00:23:40.249 "crdt3": 0 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_create_transport", 00:23:40.249 "params": { 00:23:40.249 "trtype": "TCP", 00:23:40.249 "max_queue_depth": 128, 00:23:40.249 "max_io_qpairs_per_ctrlr": 127, 00:23:40.249 "in_capsule_data_size": 4096, 00:23:40.249 "max_io_size": 131072, 00:23:40.249 "io_unit_size": 131072, 00:23:40.249 "max_aq_depth": 128, 00:23:40.249 "num_shared_buffers": 511, 00:23:40.249 "buf_cache_size": 4294967295, 00:23:40.249 "dif_insert_or_strip": false, 00:23:40.249 "zcopy": false, 00:23:40.249 "c2h_success": false, 00:23:40.249 "sock_priority": 0, 00:23:40.249 "abort_timeout_sec": 1, 00:23:40.249 "ack_timeout": 0, 00:23:40.249 "data_wr_pool_size": 0 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_create_subsystem", 00:23:40.249 "params": { 00:23:40.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.249 "allow_any_host": false, 00:23:40.249 "serial_number": "SPDK00000000000001", 00:23:40.249 "model_number": "SPDK bdev Controller", 00:23:40.249 "max_namespaces": 10, 00:23:40.249 "min_cntlid": 1, 00:23:40.249 "max_cntlid": 65519, 00:23:40.249 "ana_reporting": false 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_subsystem_add_host", 00:23:40.249 "params": { 00:23:40.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.249 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.249 "psk": "key0" 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_subsystem_add_ns", 00:23:40.249 "params": { 00:23:40.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.249 "namespace": { 00:23:40.249 "nsid": 1, 00:23:40.249 "bdev_name": "malloc0", 00:23:40.249 "nguid": "1B825A268BF348168B8370FC2BE30A0E", 00:23:40.249 "uuid": "1b825a26-8bf3-4816-8b83-70fc2be30a0e", 00:23:40.249 "no_auto_visible": false 00:23:40.249 } 00:23:40.249 } 00:23:40.249 }, 00:23:40.249 { 00:23:40.249 "method": "nvmf_subsystem_add_listener", 00:23:40.249 "params": { 00:23:40.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.249 "listen_address": { 00:23:40.249 "trtype": "TCP", 00:23:40.249 "adrfam": "IPv4", 00:23:40.249 "traddr": "10.0.0.2", 00:23:40.249 "trsvcid": "4420" 00:23:40.249 }, 00:23:40.249 "secure_channel": true 00:23:40.249 } 00:23:40.249 } 00:23:40.249 ] 00:23:40.249 } 00:23:40.249 ] 00:23:40.249 }' 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354789 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354789 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354789 ']' 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.249 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.249 [2024-12-16 22:29:29.858550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:40.249 [2024-12-16 22:29:29.858595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.249 [2024-12-16 22:29:29.937626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.508 [2024-12-16 22:29:29.958901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.508 [2024-12-16 22:29:29.958935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.508 [2024-12-16 22:29:29.958942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.508 [2024-12-16 22:29:29.958947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.508 [2024-12-16 22:29:29.958952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.508 [2024-12-16 22:29:29.959480] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.508 [2024-12-16 22:29:30.168979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.508 [2024-12-16 22:29:30.201002] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.508 [2024-12-16 22:29:30.201201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=355029 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 355029 /var/tmp/bdevperf.sock 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 355029 ']' 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:41.076 22:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.076 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:41.076 "subsystems": [ 00:23:41.076 { 00:23:41.076 "subsystem": "keyring", 00:23:41.076 "config": [ 00:23:41.077 { 00:23:41.077 "method": "keyring_file_add_key", 00:23:41.077 "params": { 00:23:41.077 "name": "key0", 00:23:41.077 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:41.077 } 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "iobuf", 00:23:41.077 "config": [ 00:23:41.077 { 00:23:41.077 "method": "iobuf_set_options", 00:23:41.077 "params": { 00:23:41.077 "small_pool_count": 8192, 00:23:41.077 "large_pool_count": 1024, 00:23:41.077 "small_bufsize": 8192, 00:23:41.077 "large_bufsize": 135168, 00:23:41.077 "enable_numa": false 00:23:41.077 } 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "sock", 00:23:41.077 "config": [ 00:23:41.077 { 00:23:41.077 "method": "sock_set_default_impl", 00:23:41.077 "params": { 00:23:41.077 "impl_name": "posix" 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "sock_impl_set_options", 00:23:41.077 "params": { 00:23:41.077 "impl_name": "ssl", 00:23:41.077 "recv_buf_size": 4096, 00:23:41.077 "send_buf_size": 4096, 00:23:41.077 "enable_recv_pipe": true, 00:23:41.077 "enable_quickack": false, 00:23:41.077 "enable_placement_id": 0, 00:23:41.077 "enable_zerocopy_send_server": true, 00:23:41.077 "enable_zerocopy_send_client": false, 00:23:41.077 "zerocopy_threshold": 0, 00:23:41.077 "tls_version": 0, 00:23:41.077 "enable_ktls": false 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "sock_impl_set_options", 00:23:41.077 "params": { 00:23:41.077 "impl_name": "posix", 00:23:41.077 "recv_buf_size": 2097152, 00:23:41.077 "send_buf_size": 2097152, 00:23:41.077 "enable_recv_pipe": true, 00:23:41.077 "enable_quickack": false, 00:23:41.077 "enable_placement_id": 0, 00:23:41.077 "enable_zerocopy_send_server": true, 00:23:41.077 "enable_zerocopy_send_client": false, 00:23:41.077 "zerocopy_threshold": 0, 00:23:41.077 "tls_version": 0, 00:23:41.077 "enable_ktls": false 00:23:41.077 } 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "vmd", 00:23:41.077 "config": [] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "accel", 00:23:41.077 "config": [ 00:23:41.077 { 00:23:41.077 "method": "accel_set_options", 00:23:41.077 "params": { 00:23:41.077 "small_cache_size": 128, 00:23:41.077 "large_cache_size": 16, 00:23:41.077 "task_count": 2048, 00:23:41.077 "sequence_count": 2048, 00:23:41.077 "buf_count": 2048 00:23:41.077 } 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "bdev", 00:23:41.077 "config": [ 00:23:41.077 { 00:23:41.077 "method": "bdev_set_options", 00:23:41.077 "params": { 00:23:41.077 "bdev_io_pool_size": 65535, 00:23:41.077 "bdev_io_cache_size": 256, 00:23:41.077 "bdev_auto_examine": true, 00:23:41.077 "iobuf_small_cache_size": 128, 00:23:41.077 "iobuf_large_cache_size": 16 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "bdev_raid_set_options", 00:23:41.077 "params": { 00:23:41.077 "process_window_size_kb": 1024, 00:23:41.077 "process_max_bandwidth_mb_sec": 0 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "bdev_iscsi_set_options", 00:23:41.077 "params": { 00:23:41.077 "timeout_sec": 30 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 
"method": "bdev_nvme_set_options", 00:23:41.077 "params": { 00:23:41.077 "action_on_timeout": "none", 00:23:41.077 "timeout_us": 0, 00:23:41.077 "timeout_admin_us": 0, 00:23:41.077 "keep_alive_timeout_ms": 10000, 00:23:41.077 "arbitration_burst": 0, 00:23:41.077 "low_priority_weight": 0, 00:23:41.077 "medium_priority_weight": 0, 00:23:41.077 "high_priority_weight": 0, 00:23:41.077 "nvme_adminq_poll_period_us": 10000, 00:23:41.077 "nvme_ioq_poll_period_us": 0, 00:23:41.077 "io_queue_requests": 512, 00:23:41.077 "delay_cmd_submit": true, 00:23:41.077 "transport_retry_count": 4, 00:23:41.077 "bdev_retry_count": 3, 00:23:41.077 "transport_ack_timeout": 0, 00:23:41.077 "ctrlr_loss_timeout_sec": 0, 00:23:41.077 "reconnect_delay_sec": 0, 00:23:41.077 "fast_io_fail_timeout_sec": 0, 00:23:41.077 "disable_auto_failback": false, 00:23:41.077 "generate_uuids": false, 00:23:41.077 "transport_tos": 0, 00:23:41.077 "nvme_error_stat": false, 00:23:41.077 "rdma_srq_size": 0, 00:23:41.077 "io_path_stat": false, 00:23:41.077 "allow_accel_sequence": false, 00:23:41.077 "rdma_max_cq_size": 0, 00:23:41.077 "rdma_cm_event_timeout_ms": 0, 00:23:41.077 "dhchap_digests": [ 00:23:41.077 "sha256", 00:23:41.077 "sha384", 00:23:41.077 "sha512" 00:23:41.077 ], 00:23:41.077 "dhchap_dhgroups": [ 00:23:41.077 "null", 00:23:41.077 "ffdhe2048", 00:23:41.077 "ffdhe3072", 00:23:41.077 "ffdhe4096", 00:23:41.077 "ffdhe6144", 00:23:41.077 "ffdhe8192" 00:23:41.077 ], 00:23:41.077 "rdma_umr_per_io": false 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "bdev_nvme_attach_controller", 00:23:41.077 "params": { 00:23:41.077 "name": "TLSTEST", 00:23:41.077 "trtype": "TCP", 00:23:41.077 "adrfam": "IPv4", 00:23:41.077 "traddr": "10.0.0.2", 00:23:41.077 "trsvcid": "4420", 00:23:41.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.077 "prchk_reftag": false, 00:23:41.077 "prchk_guard": false, 00:23:41.077 "ctrlr_loss_timeout_sec": 0, 00:23:41.077 "reconnect_delay_sec": 0, 00:23:41.077 "fast_io_fail_timeout_sec": 0, 00:23:41.077 "psk": "key0", 00:23:41.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.077 "hdgst": false, 00:23:41.077 "ddgst": false, 00:23:41.077 "multipath": "multipath" 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "bdev_nvme_set_hotplug", 00:23:41.077 "params": { 00:23:41.077 "period_us": 100000, 00:23:41.077 "enable": false 00:23:41.077 } 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "method": "bdev_wait_for_examine" 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }, 00:23:41.077 { 00:23:41.077 "subsystem": "nbd", 00:23:41.077 "config": [] 00:23:41.077 } 00:23:41.077 ] 00:23:41.077 }' 00:23:41.077 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.077 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.077 22:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.077 [2024-12-16 22:29:30.765841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:41.077 [2024-12-16 22:29:30.765886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355029 ] 00:23:41.336 [2024-12-16 22:29:30.839448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.336 [2024-12-16 22:29:30.862238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.336 [2024-12-16 22:29:31.009809] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.903 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.903 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.903 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.162 Running I/O for 10 seconds... 00:23:44.038 3738.00 IOPS, 14.60 MiB/s [2024-12-16T21:29:35.115Z] 4035.00 IOPS, 15.76 MiB/s [2024-12-16T21:29:36.050Z] 4336.33 IOPS, 16.94 MiB/s [2024-12-16T21:29:36.986Z] 4531.50 IOPS, 17.70 MiB/s [2024-12-16T21:29:37.922Z] 4578.60 IOPS, 17.89 MiB/s [2024-12-16T21:29:38.858Z] 4632.00 IOPS, 18.09 MiB/s [2024-12-16T21:29:39.793Z] 4657.43 IOPS, 18.19 MiB/s [2024-12-16T21:29:40.729Z] 4711.38 IOPS, 18.40 MiB/s [2024-12-16T21:29:42.107Z] 4743.44 IOPS, 18.53 MiB/s [2024-12-16T21:29:42.107Z] 4767.50 IOPS, 18.62 MiB/s 00:23:52.406 Latency(us) 00:23:52.406 [2024-12-16T21:29:42.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.406 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:52.406 Verification LBA range: start 0x0 length 0x2000 00:23:52.406 TLSTESTn1 : 10.02 4772.03 18.64 0.00 0.00 26785.28 6584.81 43940.33 00:23:52.406 [2024-12-16T21:29:42.107Z] =================================================================================================================== 00:23:52.406 [2024-12-16T21:29:42.107Z] Total : 4772.03 18.64 0.00 0.00 26785.28 6584.81 43940.33 00:23:52.406 { 00:23:52.406 "results": [ 00:23:52.406 { 00:23:52.406 "job": "TLSTESTn1", 00:23:52.406 "core_mask": "0x4", 00:23:52.406 "workload": "verify", 00:23:52.406 "status": "finished", 00:23:52.406 "verify_range": { 00:23:52.406 "start": 0, 00:23:52.406 "length": 8192 00:23:52.406 }, 00:23:52.406 "queue_depth": 128, 00:23:52.406 "io_size": 4096, 00:23:52.406 "runtime": 10.017325, 00:23:52.406 "iops": 4772.032453773837, 00:23:52.406 "mibps": 18.64075177255405, 00:23:52.406 "io_failed": 0, 00:23:52.406 "io_timeout": 0, 00:23:52.406 "avg_latency_us": 26785.284710124786, 00:23:52.406 "min_latency_us": 6584.8076190476195, 00:23:52.406 "max_latency_us": 43940.32761904762 00:23:52.406 } 00:23:52.406 ], 00:23:52.406 "core_count": 1 00:23:52.406 } 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 355029 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 355029 ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 355029 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355029 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355029' 00:23:52.406 killing process with pid 355029 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 355029 00:23:52.406 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.406 00:23:52.406 Latency(us) 00:23:52.406 [2024-12-16T21:29:42.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.406 [2024-12-16T21:29:42.107Z] =================================================================================================================== 00:23:52.406 [2024-12-16T21:29:42.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 355029 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 354789 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354789 ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354789 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354789 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354789' 00:23:52.406 killing process with pid 354789 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354789 00:23:52.406 22:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354789 00:23:52.665 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=356816 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 356816 00:23:52.666 22:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 356816 ']' 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.666 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.666 [2024-12-16 22:29:42.212291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:52.666 [2024-12-16 22:29:42.212336] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.666 [2024-12-16 22:29:42.288711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.666 [2024-12-16 22:29:42.309983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.666 [2024-12-16 22:29:42.310018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.666 [2024-12-16 22:29:42.310024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.666 [2024-12-16 22:29:42.310030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.666 [2024-12-16 22:29:42.310035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
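With a fresh target up, the script makes another setup_nvmf_tgt pass; the rpc.py calls traced below are, in order, the whole TLS target bring-up. A condensed sketch of that sequence, reconstructed from the traced commands (./scripts/rpc.py abbreviates the full workspace path; per the earlier config dumps, -o corresponds to "c2h_success": false):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o                 # TCP transport; -o disables C2H success
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10                      # serial number, up to 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                    # -k requests a TLS-secured listener
  $RPC bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY   # PSK file, mode 0600 by this point
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0             # pin the PSK to this host NQN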
00:23:52.666 [2024-12-16 22:29:42.310547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.924 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.924 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.924 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vWxpl3eHQY 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vWxpl3eHQY 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.925 [2024-12-16 22:29:42.606524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.925 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.183 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.442 [2024-12-16 22:29:42.963448] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.442 [2024-12-16 22:29:42.963661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.442 22:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.701 malloc0 00:23:53.701 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.701 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:53.959 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=357067 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 357067 /var/tmp/bdevperf.sock 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 357067 ']' 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.218 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.218 [2024-12-16 22:29:43.748328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:54.218 [2024-12-16 22:29:43.748375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357067 ] 00:23:54.218 [2024-12-16 22:29:43.821911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.218 [2024-12-16 22:29:43.843768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.477 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.477 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.477 22:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:54.477 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:54.736 [2024-12-16 22:29:44.274428] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.736 nvme0n1 00:23:54.736 22:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.994 Running I/O for 1 seconds... 
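Strung together, the RPC trace since the restart configures TLS twice from the same PSK interchange file: the target gets a TLS-enabled listener (-k) and a host entry bound to key0, and the bdevperf initiator registers the identical key over its private RPC socket before attaching with --psk key0. A condensed sketch of that sequence, reusing exactly the flags visible in the trace (run from the SPDK repo root; /tmp/tmp.vWxpl3eHQY is the key file generated earlier in the run):

rpc=./scripts/rpc.py
# --- target side, default RPC socket /var/tmp/spdk.sock ---
$rpc nvmf_create_transport -t tcp -o     # -o: disable C2H success optimization ("c2h_success": false in the dumps below)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# --- initiator side: bdevperf idles (-z) until configured over its own socket ---
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Both ends must resolve key0 to byte-identical PSK material, otherwise the TLS handshake inside bdev_nvme_attach_controller fails and no nvme0n1 bdev is created; when it succeeds, perform_tests produces the Latency(us) table that follows.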
00:23:55.934 5259.00 IOPS, 20.54 MiB/s 00:23:55.934 Latency(us) 00:23:55.934 [2024-12-16T21:29:45.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.934 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:55.934 Verification LBA range: start 0x0 length 0x2000 00:23:55.934 nvme0n1 : 1.02 5295.20 20.68 0.00 0.00 23999.84 4712.35 38447.79 00:23:55.934 [2024-12-16T21:29:45.635Z] =================================================================================================================== 00:23:55.934 [2024-12-16T21:29:45.635Z] Total : 5295.20 20.68 0.00 0.00 23999.84 4712.35 38447.79 00:23:55.934 { 00:23:55.934 "results": [ 00:23:55.934 { 00:23:55.934 "job": "nvme0n1", 00:23:55.934 "core_mask": "0x2", 00:23:55.934 "workload": "verify", 00:23:55.934 "status": "finished", 00:23:55.934 "verify_range": { 00:23:55.934 "start": 0, 00:23:55.934 "length": 8192 00:23:55.934 }, 00:23:55.934 "queue_depth": 128, 00:23:55.934 "io_size": 4096, 00:23:55.934 "runtime": 1.017337, 00:23:55.934 "iops": 5295.197166720565, 00:23:55.934 "mibps": 20.684363932502208, 00:23:55.934 "io_failed": 0, 00:23:55.934 "io_timeout": 0, 00:23:55.934 "avg_latency_us": 23999.840847189444, 00:23:55.934 "min_latency_us": 4712.350476190476, 00:23:55.934 "max_latency_us": 38447.78666666667 00:23:55.934 } 00:23:55.934 ], 00:23:55.934 "core_count": 1 00:23:55.934 } 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 357067 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357067 ']' 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357067 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357067 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357067' 00:23:55.934 killing process with pid 357067 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357067 00:23:55.934 Received shutdown signal, test time was about 1.000000 seconds 00:23:55.934 00:23:55.934 Latency(us) 00:23:55.934 [2024-12-16T21:29:45.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.934 [2024-12-16T21:29:45.635Z] =================================================================================================================== 00:23:55.934 [2024-12-16T21:29:45.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:55.934 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357067 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 356816 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 356816 ']' 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 356816 00:23:56.193 22:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 356816 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 356816' 00:23:56.193 killing process with pid 356816 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 356816 00:23:56.193 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 356816 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357439 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357439 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357439 ']' 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.452 22:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.452 [2024-12-16 22:29:45.981095] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:56.452 [2024-12-16 22:29:45.981140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.452 [2024-12-16 22:29:46.058650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.452 [2024-12-16 22:29:46.079862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.452 [2024-12-16 22:29:46.079897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:56.452 [2024-12-16 22:29:46.079904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.452 [2024-12-16 22:29:46.079909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.452 [2024-12-16 22:29:46.079914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.452 [2024-12-16 22:29:46.080397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.711 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.712 [2024-12-16 22:29:46.211688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.712 malloc0 00:23:56.712 [2024-12-16 22:29:46.239710] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.712 [2024-12-16 22:29:46.239921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=357549 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 357549 /var/tmp/bdevperf.sock 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357549 ']' 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.712 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.712 [2024-12-16 22:29:46.314123] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:56.712 [2024-12-16 22:29:46.314164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357549 ] 00:23:56.712 [2024-12-16 22:29:46.387564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.712 [2024-12-16 22:29:46.410081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.971 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.971 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.971 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vWxpl3eHQY 00:23:57.230 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:57.230 [2024-12-16 22:29:46.840855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.230 nvme0n1 00:23:57.230 22:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.489 Running I/O for 1 seconds... 00:23:58.425 5010.00 IOPS, 19.57 MiB/s 00:23:58.425 Latency(us) 00:23:58.425 [2024-12-16T21:29:48.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.425 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.425 Verification LBA range: start 0x0 length 0x2000 00:23:58.425 nvme0n1 : 1.01 5072.57 19.81 0.00 0.00 25066.48 5367.71 31082.79 00:23:58.425 [2024-12-16T21:29:48.126Z] =================================================================================================================== 00:23:58.425 [2024-12-16T21:29:48.126Z] Total : 5072.57 19.81 0.00 0.00 25066.48 5367.71 31082.79 00:23:58.425 { 00:23:58.425 "results": [ 00:23:58.425 { 00:23:58.425 "job": "nvme0n1", 00:23:58.425 "core_mask": "0x2", 00:23:58.425 "workload": "verify", 00:23:58.425 "status": "finished", 00:23:58.425 "verify_range": { 00:23:58.425 "start": 0, 00:23:58.425 "length": 8192 00:23:58.425 }, 00:23:58.425 "queue_depth": 128, 00:23:58.425 "io_size": 4096, 00:23:58.425 "runtime": 1.013096, 00:23:58.425 "iops": 5072.569628149751, 00:23:58.425 "mibps": 19.814725109959966, 00:23:58.425 "io_failed": 0, 00:23:58.425 "io_timeout": 0, 00:23:58.425 "avg_latency_us": 25066.47701331554, 00:23:58.425 "min_latency_us": 5367.710476190477, 00:23:58.425 "max_latency_us": 31082.788571428573 00:23:58.425 } 00:23:58.425 ], 00:23:58.425 "core_count": 1 00:23:58.425 } 00:23:58.425 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:58.425 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.425 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.684 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.684 22:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:58.684 "subsystems": [ 00:23:58.684 { 00:23:58.684 "subsystem": "keyring", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "keyring_file_add_key", 00:23:58.684 "params": { 00:23:58.684 "name": "key0", 00:23:58.684 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:58.684 } 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "iobuf", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "iobuf_set_options", 00:23:58.684 "params": { 00:23:58.684 "small_pool_count": 8192, 00:23:58.684 "large_pool_count": 1024, 00:23:58.684 "small_bufsize": 8192, 00:23:58.684 "large_bufsize": 135168, 00:23:58.684 "enable_numa": false 00:23:58.684 } 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "sock", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "sock_set_default_impl", 00:23:58.684 "params": { 00:23:58.684 "impl_name": "posix" 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "sock_impl_set_options", 00:23:58.684 "params": { 00:23:58.684 "impl_name": "ssl", 00:23:58.684 "recv_buf_size": 4096, 00:23:58.684 "send_buf_size": 4096, 00:23:58.684 "enable_recv_pipe": true, 00:23:58.684 "enable_quickack": false, 00:23:58.684 "enable_placement_id": 0, 00:23:58.684 "enable_zerocopy_send_server": true, 00:23:58.684 "enable_zerocopy_send_client": false, 00:23:58.684 "zerocopy_threshold": 0, 00:23:58.684 "tls_version": 0, 00:23:58.684 "enable_ktls": false 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "sock_impl_set_options", 00:23:58.684 "params": { 00:23:58.684 "impl_name": "posix", 00:23:58.684 "recv_buf_size": 2097152, 00:23:58.684 "send_buf_size": 2097152, 00:23:58.684 "enable_recv_pipe": true, 00:23:58.684 "enable_quickack": false, 00:23:58.684 "enable_placement_id": 0, 00:23:58.684 "enable_zerocopy_send_server": true, 00:23:58.684 "enable_zerocopy_send_client": false, 00:23:58.684 "zerocopy_threshold": 0, 00:23:58.684 "tls_version": 0, 00:23:58.684 "enable_ktls": false 00:23:58.684 } 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "vmd", 00:23:58.684 "config": [] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "accel", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "accel_set_options", 00:23:58.684 "params": { 00:23:58.684 "small_cache_size": 128, 00:23:58.684 "large_cache_size": 16, 00:23:58.684 "task_count": 2048, 00:23:58.684 "sequence_count": 2048, 00:23:58.684 "buf_count": 2048 00:23:58.684 } 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "bdev", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "bdev_set_options", 00:23:58.684 "params": { 00:23:58.684 "bdev_io_pool_size": 65535, 00:23:58.684 "bdev_io_cache_size": 256, 00:23:58.684 "bdev_auto_examine": true, 00:23:58.684 "iobuf_small_cache_size": 128, 00:23:58.684 "iobuf_large_cache_size": 16 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_raid_set_options", 00:23:58.684 "params": { 00:23:58.684 "process_window_size_kb": 1024, 00:23:58.684 "process_max_bandwidth_mb_sec": 0 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_iscsi_set_options", 00:23:58.684 "params": { 00:23:58.684 "timeout_sec": 30 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_nvme_set_options", 00:23:58.684 "params": { 00:23:58.684 "action_on_timeout": "none", 00:23:58.684 
"timeout_us": 0, 00:23:58.684 "timeout_admin_us": 0, 00:23:58.684 "keep_alive_timeout_ms": 10000, 00:23:58.684 "arbitration_burst": 0, 00:23:58.684 "low_priority_weight": 0, 00:23:58.684 "medium_priority_weight": 0, 00:23:58.684 "high_priority_weight": 0, 00:23:58.684 "nvme_adminq_poll_period_us": 10000, 00:23:58.684 "nvme_ioq_poll_period_us": 0, 00:23:58.684 "io_queue_requests": 0, 00:23:58.684 "delay_cmd_submit": true, 00:23:58.684 "transport_retry_count": 4, 00:23:58.684 "bdev_retry_count": 3, 00:23:58.684 "transport_ack_timeout": 0, 00:23:58.684 "ctrlr_loss_timeout_sec": 0, 00:23:58.684 "reconnect_delay_sec": 0, 00:23:58.684 "fast_io_fail_timeout_sec": 0, 00:23:58.684 "disable_auto_failback": false, 00:23:58.684 "generate_uuids": false, 00:23:58.684 "transport_tos": 0, 00:23:58.684 "nvme_error_stat": false, 00:23:58.684 "rdma_srq_size": 0, 00:23:58.684 "io_path_stat": false, 00:23:58.684 "allow_accel_sequence": false, 00:23:58.684 "rdma_max_cq_size": 0, 00:23:58.684 "rdma_cm_event_timeout_ms": 0, 00:23:58.684 "dhchap_digests": [ 00:23:58.684 "sha256", 00:23:58.684 "sha384", 00:23:58.684 "sha512" 00:23:58.684 ], 00:23:58.684 "dhchap_dhgroups": [ 00:23:58.684 "null", 00:23:58.684 "ffdhe2048", 00:23:58.684 "ffdhe3072", 00:23:58.684 "ffdhe4096", 00:23:58.684 "ffdhe6144", 00:23:58.684 "ffdhe8192" 00:23:58.684 ], 00:23:58.684 "rdma_umr_per_io": false 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_nvme_set_hotplug", 00:23:58.684 "params": { 00:23:58.684 "period_us": 100000, 00:23:58.684 "enable": false 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_malloc_create", 00:23:58.684 "params": { 00:23:58.684 "name": "malloc0", 00:23:58.684 "num_blocks": 8192, 00:23:58.684 "block_size": 4096, 00:23:58.684 "physical_block_size": 4096, 00:23:58.684 "uuid": "214ad5ed-8964-4acf-a9cc-2c0501d4148a", 00:23:58.684 "optimal_io_boundary": 0, 00:23:58.684 "md_size": 0, 00:23:58.684 "dif_type": 0, 00:23:58.684 "dif_is_head_of_md": false, 00:23:58.684 "dif_pi_format": 0 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "bdev_wait_for_examine" 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "nbd", 00:23:58.684 "config": [] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "scheduler", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "framework_set_scheduler", 00:23:58.684 "params": { 00:23:58.684 "name": "static" 00:23:58.684 } 00:23:58.684 } 00:23:58.684 ] 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "subsystem": "nvmf", 00:23:58.684 "config": [ 00:23:58.684 { 00:23:58.684 "method": "nvmf_set_config", 00:23:58.684 "params": { 00:23:58.684 "discovery_filter": "match_any", 00:23:58.684 "admin_cmd_passthru": { 00:23:58.684 "identify_ctrlr": false 00:23:58.684 }, 00:23:58.684 "dhchap_digests": [ 00:23:58.684 "sha256", 00:23:58.684 "sha384", 00:23:58.684 "sha512" 00:23:58.684 ], 00:23:58.684 "dhchap_dhgroups": [ 00:23:58.684 "null", 00:23:58.684 "ffdhe2048", 00:23:58.684 "ffdhe3072", 00:23:58.684 "ffdhe4096", 00:23:58.684 "ffdhe6144", 00:23:58.684 "ffdhe8192" 00:23:58.684 ] 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "nvmf_set_max_subsystems", 00:23:58.684 "params": { 00:23:58.684 "max_subsystems": 1024 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": "nvmf_set_crdt", 00:23:58.684 "params": { 00:23:58.684 "crdt1": 0, 00:23:58.684 "crdt2": 0, 00:23:58.684 "crdt3": 0 00:23:58.684 } 00:23:58.684 }, 00:23:58.684 { 00:23:58.684 "method": 
"nvmf_create_transport", 00:23:58.685 "params": { 00:23:58.685 "trtype": "TCP", 00:23:58.685 "max_queue_depth": 128, 00:23:58.685 "max_io_qpairs_per_ctrlr": 127, 00:23:58.685 "in_capsule_data_size": 4096, 00:23:58.685 "max_io_size": 131072, 00:23:58.685 "io_unit_size": 131072, 00:23:58.685 "max_aq_depth": 128, 00:23:58.685 "num_shared_buffers": 511, 00:23:58.685 "buf_cache_size": 4294967295, 00:23:58.685 "dif_insert_or_strip": false, 00:23:58.685 "zcopy": false, 00:23:58.685 "c2h_success": false, 00:23:58.685 "sock_priority": 0, 00:23:58.685 "abort_timeout_sec": 1, 00:23:58.685 "ack_timeout": 0, 00:23:58.685 "data_wr_pool_size": 0 00:23:58.685 } 00:23:58.685 }, 00:23:58.685 { 00:23:58.685 "method": "nvmf_create_subsystem", 00:23:58.685 "params": { 00:23:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.685 "allow_any_host": false, 00:23:58.685 "serial_number": "00000000000000000000", 00:23:58.685 "model_number": "SPDK bdev Controller", 00:23:58.685 "max_namespaces": 32, 00:23:58.685 "min_cntlid": 1, 00:23:58.685 "max_cntlid": 65519, 00:23:58.685 "ana_reporting": false 00:23:58.685 } 00:23:58.685 }, 00:23:58.685 { 00:23:58.685 "method": "nvmf_subsystem_add_host", 00:23:58.685 "params": { 00:23:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.685 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.685 "psk": "key0" 00:23:58.685 } 00:23:58.685 }, 00:23:58.685 { 00:23:58.685 "method": "nvmf_subsystem_add_ns", 00:23:58.685 "params": { 00:23:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.685 "namespace": { 00:23:58.685 "nsid": 1, 00:23:58.685 "bdev_name": "malloc0", 00:23:58.685 "nguid": "214AD5ED89644ACFA9CC2C0501D4148A", 00:23:58.685 "uuid": "214ad5ed-8964-4acf-a9cc-2c0501d4148a", 00:23:58.685 "no_auto_visible": false 00:23:58.685 } 00:23:58.685 } 00:23:58.685 }, 00:23:58.685 { 00:23:58.685 "method": "nvmf_subsystem_add_listener", 00:23:58.685 "params": { 00:23:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.685 "listen_address": { 00:23:58.685 "trtype": "TCP", 00:23:58.685 "adrfam": "IPv4", 00:23:58.685 "traddr": "10.0.0.2", 00:23:58.685 "trsvcid": "4420" 00:23:58.685 }, 00:23:58.685 "secure_channel": false, 00:23:58.685 "sock_impl": "ssl" 00:23:58.685 } 00:23:58.685 } 00:23:58.685 ] 00:23:58.685 } 00:23:58.685 ] 00:23:58.685 }' 00:23:58.685 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:58.944 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:58.944 "subsystems": [ 00:23:58.944 { 00:23:58.944 "subsystem": "keyring", 00:23:58.944 "config": [ 00:23:58.944 { 00:23:58.944 "method": "keyring_file_add_key", 00:23:58.944 "params": { 00:23:58.944 "name": "key0", 00:23:58.944 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:58.944 } 00:23:58.944 } 00:23:58.944 ] 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "subsystem": "iobuf", 00:23:58.944 "config": [ 00:23:58.944 { 00:23:58.944 "method": "iobuf_set_options", 00:23:58.944 "params": { 00:23:58.944 "small_pool_count": 8192, 00:23:58.944 "large_pool_count": 1024, 00:23:58.944 "small_bufsize": 8192, 00:23:58.944 "large_bufsize": 135168, 00:23:58.944 "enable_numa": false 00:23:58.944 } 00:23:58.944 } 00:23:58.944 ] 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "subsystem": "sock", 00:23:58.944 "config": [ 00:23:58.944 { 00:23:58.944 "method": "sock_set_default_impl", 00:23:58.944 "params": { 00:23:58.944 "impl_name": "posix" 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 
"method": "sock_impl_set_options", 00:23:58.944 "params": { 00:23:58.944 "impl_name": "ssl", 00:23:58.944 "recv_buf_size": 4096, 00:23:58.944 "send_buf_size": 4096, 00:23:58.944 "enable_recv_pipe": true, 00:23:58.944 "enable_quickack": false, 00:23:58.944 "enable_placement_id": 0, 00:23:58.944 "enable_zerocopy_send_server": true, 00:23:58.944 "enable_zerocopy_send_client": false, 00:23:58.944 "zerocopy_threshold": 0, 00:23:58.944 "tls_version": 0, 00:23:58.944 "enable_ktls": false 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "method": "sock_impl_set_options", 00:23:58.944 "params": { 00:23:58.944 "impl_name": "posix", 00:23:58.944 "recv_buf_size": 2097152, 00:23:58.944 "send_buf_size": 2097152, 00:23:58.944 "enable_recv_pipe": true, 00:23:58.944 "enable_quickack": false, 00:23:58.944 "enable_placement_id": 0, 00:23:58.944 "enable_zerocopy_send_server": true, 00:23:58.944 "enable_zerocopy_send_client": false, 00:23:58.944 "zerocopy_threshold": 0, 00:23:58.944 "tls_version": 0, 00:23:58.944 "enable_ktls": false 00:23:58.944 } 00:23:58.944 } 00:23:58.944 ] 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "subsystem": "vmd", 00:23:58.944 "config": [] 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "subsystem": "accel", 00:23:58.944 "config": [ 00:23:58.944 { 00:23:58.944 "method": "accel_set_options", 00:23:58.944 "params": { 00:23:58.944 "small_cache_size": 128, 00:23:58.944 "large_cache_size": 16, 00:23:58.944 "task_count": 2048, 00:23:58.944 "sequence_count": 2048, 00:23:58.944 "buf_count": 2048 00:23:58.944 } 00:23:58.944 } 00:23:58.944 ] 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "subsystem": "bdev", 00:23:58.944 "config": [ 00:23:58.944 { 00:23:58.944 "method": "bdev_set_options", 00:23:58.944 "params": { 00:23:58.944 "bdev_io_pool_size": 65535, 00:23:58.944 "bdev_io_cache_size": 256, 00:23:58.944 "bdev_auto_examine": true, 00:23:58.944 "iobuf_small_cache_size": 128, 00:23:58.944 "iobuf_large_cache_size": 16 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "method": "bdev_raid_set_options", 00:23:58.944 "params": { 00:23:58.944 "process_window_size_kb": 1024, 00:23:58.944 "process_max_bandwidth_mb_sec": 0 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "method": "bdev_iscsi_set_options", 00:23:58.944 "params": { 00:23:58.944 "timeout_sec": 30 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "method": "bdev_nvme_set_options", 00:23:58.944 "params": { 00:23:58.944 "action_on_timeout": "none", 00:23:58.944 "timeout_us": 0, 00:23:58.944 "timeout_admin_us": 0, 00:23:58.944 "keep_alive_timeout_ms": 10000, 00:23:58.944 "arbitration_burst": 0, 00:23:58.944 "low_priority_weight": 0, 00:23:58.944 "medium_priority_weight": 0, 00:23:58.944 "high_priority_weight": 0, 00:23:58.944 "nvme_adminq_poll_period_us": 10000, 00:23:58.944 "nvme_ioq_poll_period_us": 0, 00:23:58.944 "io_queue_requests": 512, 00:23:58.944 "delay_cmd_submit": true, 00:23:58.944 "transport_retry_count": 4, 00:23:58.944 "bdev_retry_count": 3, 00:23:58.944 "transport_ack_timeout": 0, 00:23:58.944 "ctrlr_loss_timeout_sec": 0, 00:23:58.944 "reconnect_delay_sec": 0, 00:23:58.944 "fast_io_fail_timeout_sec": 0, 00:23:58.944 "disable_auto_failback": false, 00:23:58.944 "generate_uuids": false, 00:23:58.944 "transport_tos": 0, 00:23:58.944 "nvme_error_stat": false, 00:23:58.944 "rdma_srq_size": 0, 00:23:58.944 "io_path_stat": false, 00:23:58.944 "allow_accel_sequence": false, 00:23:58.944 "rdma_max_cq_size": 0, 00:23:58.944 "rdma_cm_event_timeout_ms": 0, 00:23:58.944 "dhchap_digests": [ 00:23:58.944 
"sha256", 00:23:58.944 "sha384", 00:23:58.944 "sha512" 00:23:58.944 ], 00:23:58.944 "dhchap_dhgroups": [ 00:23:58.944 "null", 00:23:58.944 "ffdhe2048", 00:23:58.944 "ffdhe3072", 00:23:58.944 "ffdhe4096", 00:23:58.944 "ffdhe6144", 00:23:58.944 "ffdhe8192" 00:23:58.944 ], 00:23:58.944 "rdma_umr_per_io": false 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.944 "method": "bdev_nvme_attach_controller", 00:23:58.944 "params": { 00:23:58.944 "name": "nvme0", 00:23:58.944 "trtype": "TCP", 00:23:58.944 "adrfam": "IPv4", 00:23:58.944 "traddr": "10.0.0.2", 00:23:58.944 "trsvcid": "4420", 00:23:58.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.944 "prchk_reftag": false, 00:23:58.944 "prchk_guard": false, 00:23:58.944 "ctrlr_loss_timeout_sec": 0, 00:23:58.944 "reconnect_delay_sec": 0, 00:23:58.944 "fast_io_fail_timeout_sec": 0, 00:23:58.944 "psk": "key0", 00:23:58.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.944 "hdgst": false, 00:23:58.944 "ddgst": false, 00:23:58.944 "multipath": "multipath" 00:23:58.944 } 00:23:58.944 }, 00:23:58.944 { 00:23:58.945 "method": "bdev_nvme_set_hotplug", 00:23:58.945 "params": { 00:23:58.945 "period_us": 100000, 00:23:58.945 "enable": false 00:23:58.945 } 00:23:58.945 }, 00:23:58.945 { 00:23:58.945 "method": "bdev_enable_histogram", 00:23:58.945 "params": { 00:23:58.945 "name": "nvme0n1", 00:23:58.945 "enable": true 00:23:58.945 } 00:23:58.945 }, 00:23:58.945 { 00:23:58.945 "method": "bdev_wait_for_examine" 00:23:58.945 } 00:23:58.945 ] 00:23:58.945 }, 00:23:58.945 { 00:23:58.945 "subsystem": "nbd", 00:23:58.945 "config": [] 00:23:58.945 } 00:23:58.945 ] 00:23:58.945 }' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 357549 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357549 ']' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357549 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357549 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357549' 00:23:58.945 killing process with pid 357549 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357549 00:23:58.945 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.945 00:23:58.945 Latency(us) 00:23:58.945 [2024-12-16T21:29:48.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.945 [2024-12-16T21:29:48.646Z] =================================================================================================================== 00:23:58.945 [2024-12-16T21:29:48.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357549 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 357439 00:23:58.945 22:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357439 ']' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357439 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.945 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357439 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357439' 00:23:59.206 killing process with pid 357439 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357439 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357439 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.206 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:59.206 "subsystems": [ 00:23:59.206 { 00:23:59.206 "subsystem": "keyring", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "keyring_file_add_key", 00:23:59.206 "params": { 00:23:59.206 "name": "key0", 00:23:59.206 "path": "/tmp/tmp.vWxpl3eHQY" 00:23:59.206 } 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "iobuf", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "iobuf_set_options", 00:23:59.206 "params": { 00:23:59.206 "small_pool_count": 8192, 00:23:59.206 "large_pool_count": 1024, 00:23:59.206 "small_bufsize": 8192, 00:23:59.206 "large_bufsize": 135168, 00:23:59.206 "enable_numa": false 00:23:59.206 } 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "sock", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "sock_set_default_impl", 00:23:59.206 "params": { 00:23:59.206 "impl_name": "posix" 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "sock_impl_set_options", 00:23:59.206 "params": { 00:23:59.206 "impl_name": "ssl", 00:23:59.206 "recv_buf_size": 4096, 00:23:59.206 "send_buf_size": 4096, 00:23:59.206 "enable_recv_pipe": true, 00:23:59.206 "enable_quickack": false, 00:23:59.206 "enable_placement_id": 0, 00:23:59.206 "enable_zerocopy_send_server": true, 00:23:59.206 "enable_zerocopy_send_client": false, 00:23:59.206 "zerocopy_threshold": 0, 00:23:59.206 "tls_version": 0, 00:23:59.206 "enable_ktls": false 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "sock_impl_set_options", 00:23:59.206 "params": { 00:23:59.206 "impl_name": "posix", 00:23:59.206 "recv_buf_size": 2097152, 00:23:59.206 "send_buf_size": 2097152, 00:23:59.206 "enable_recv_pipe": true, 00:23:59.206 "enable_quickack": false, 00:23:59.206 "enable_placement_id": 0, 00:23:59.206 "enable_zerocopy_send_server": true, 00:23:59.206 "enable_zerocopy_send_client": false, 00:23:59.206 
"zerocopy_threshold": 0, 00:23:59.206 "tls_version": 0, 00:23:59.206 "enable_ktls": false 00:23:59.206 } 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "vmd", 00:23:59.206 "config": [] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "accel", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "accel_set_options", 00:23:59.206 "params": { 00:23:59.206 "small_cache_size": 128, 00:23:59.206 "large_cache_size": 16, 00:23:59.206 "task_count": 2048, 00:23:59.206 "sequence_count": 2048, 00:23:59.206 "buf_count": 2048 00:23:59.206 } 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "bdev", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "bdev_set_options", 00:23:59.206 "params": { 00:23:59.206 "bdev_io_pool_size": 65535, 00:23:59.206 "bdev_io_cache_size": 256, 00:23:59.206 "bdev_auto_examine": true, 00:23:59.206 "iobuf_small_cache_size": 128, 00:23:59.206 "iobuf_large_cache_size": 16 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_raid_set_options", 00:23:59.206 "params": { 00:23:59.206 "process_window_size_kb": 1024, 00:23:59.206 "process_max_bandwidth_mb_sec": 0 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_iscsi_set_options", 00:23:59.206 "params": { 00:23:59.206 "timeout_sec": 30 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_nvme_set_options", 00:23:59.206 "params": { 00:23:59.206 "action_on_timeout": "none", 00:23:59.206 "timeout_us": 0, 00:23:59.206 "timeout_admin_us": 0, 00:23:59.206 "keep_alive_timeout_ms": 10000, 00:23:59.206 "arbitration_burst": 0, 00:23:59.206 "low_priority_weight": 0, 00:23:59.206 "medium_priority_weight": 0, 00:23:59.206 "high_priority_weight": 0, 00:23:59.206 "nvme_adminq_poll_period_us": 10000, 00:23:59.206 "nvme_ioq_poll_period_us": 0, 00:23:59.206 "io_queue_requests": 0, 00:23:59.206 "delay_cmd_submit": true, 00:23:59.206 "transport_retry_count": 4, 00:23:59.206 "bdev_retry_count": 3, 00:23:59.206 "transport_ack_timeout": 0, 00:23:59.206 "ctrlr_loss_timeout_sec": 0, 00:23:59.206 "reconnect_delay_sec": 0, 00:23:59.206 "fast_io_fail_timeout_sec": 0, 00:23:59.206 "disable_auto_failback": false, 00:23:59.206 "generate_uuids": false, 00:23:59.206 "transport_tos": 0, 00:23:59.206 "nvme_error_stat": false, 00:23:59.206 "rdma_srq_size": 0, 00:23:59.206 "io_path_stat": false, 00:23:59.206 "allow_accel_sequence": false, 00:23:59.206 "rdma_max_cq_size": 0, 00:23:59.206 "rdma_cm_event_timeout_ms": 0, 00:23:59.206 "dhchap_digests": [ 00:23:59.206 "sha256", 00:23:59.206 "sha384", 00:23:59.206 "sha512" 00:23:59.206 ], 00:23:59.206 "dhchap_dhgroups": [ 00:23:59.206 "null", 00:23:59.206 "ffdhe2048", 00:23:59.206 "ffdhe3072", 00:23:59.206 "ffdhe4096", 00:23:59.206 "ffdhe6144", 00:23:59.206 "ffdhe8192" 00:23:59.206 ], 00:23:59.206 "rdma_umr_per_io": false 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_nvme_set_hotplug", 00:23:59.206 "params": { 00:23:59.206 "period_us": 100000, 00:23:59.206 "enable": false 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_malloc_create", 00:23:59.206 "params": { 00:23:59.206 "name": "malloc0", 00:23:59.206 "num_blocks": 8192, 00:23:59.206 "block_size": 4096, 00:23:59.206 "physical_block_size": 4096, 00:23:59.206 "uuid": "214ad5ed-8964-4acf-a9cc-2c0501d4148a", 00:23:59.206 "optimal_io_boundary": 0, 00:23:59.206 "md_size": 0, 00:23:59.206 "dif_type": 0, 00:23:59.206 "dif_is_head_of_md": false, 00:23:59.206 
"dif_pi_format": 0 00:23:59.206 } 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "method": "bdev_wait_for_examine" 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "nbd", 00:23:59.206 "config": [] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "scheduler", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "framework_set_scheduler", 00:23:59.206 "params": { 00:23:59.206 "name": "static" 00:23:59.206 } 00:23:59.206 } 00:23:59.206 ] 00:23:59.206 }, 00:23:59.206 { 00:23:59.206 "subsystem": "nvmf", 00:23:59.206 "config": [ 00:23:59.206 { 00:23:59.206 "method": "nvmf_set_config", 00:23:59.206 "params": { 00:23:59.206 "discovery_filter": "match_any", 00:23:59.206 "admin_cmd_passthru": { 00:23:59.206 "identify_ctrlr": false 00:23:59.206 }, 00:23:59.206 "dhchap_digests": [ 00:23:59.206 "sha256", 00:23:59.206 "sha384", 00:23:59.206 "sha512" 00:23:59.206 ], 00:23:59.206 "dhchap_dhgroups": [ 00:23:59.206 "null", 00:23:59.206 "ffdhe2048", 00:23:59.206 "ffdhe3072", 00:23:59.207 "ffdhe4096", 00:23:59.207 "ffdhe6144", 00:23:59.207 "ffdhe8192" 00:23:59.207 ] 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_set_max_subsystems", 00:23:59.207 "params": { 00:23:59.207 "max_subsystems": 1024 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_set_crdt", 00:23:59.207 "params": { 00:23:59.207 "crdt1": 0, 00:23:59.207 "crdt2": 0, 00:23:59.207 "crdt3": 0 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_create_transport", 00:23:59.207 "params": { 00:23:59.207 "trtype": "TCP", 00:23:59.207 "max_queue_depth": 128, 00:23:59.207 "max_io_qpairs_per_ctrlr": 127, 00:23:59.207 "in_capsule_data_size": 4096, 00:23:59.207 "max_io_size": 131072, 00:23:59.207 "io_unit_size": 131072, 00:23:59.207 "max_aq_depth": 128, 00:23:59.207 "num_shared_buffers": 511, 00:23:59.207 "buf_cache_size": 4294967295, 00:23:59.207 "dif_insert_or_strip": false, 00:23:59.207 "zcopy": false, 00:23:59.207 "c2h_success": false, 00:23:59.207 "sock_priority": 0, 00:23:59.207 "abort_timeout_sec": 1, 00:23:59.207 "ack_timeout": 0, 00:23:59.207 "data_wr_pool_size": 0 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_create_subsystem", 00:23:59.207 "params": { 00:23:59.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.207 "allow_any_host": false, 00:23:59.207 "serial_number": "00000000000000000000", 00:23:59.207 "model_number": "SPDK bdev Controller", 00:23:59.207 "max_namespaces": 32, 00:23:59.207 "min_cntlid": 1, 00:23:59.207 "max_cntlid": 65519, 00:23:59.207 "ana_reporting": false 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_subsystem_add_host", 00:23:59.207 "params": { 00:23:59.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.207 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.207 "psk": "key0" 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_subsystem_add_ns", 00:23:59.207 "params": { 00:23:59.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.207 "namespace": { 00:23:59.207 "nsid": 1, 00:23:59.207 "bdev_name": "malloc0", 00:23:59.207 "nguid": "214AD5ED89644ACFA9CC2C0501D4148A", 00:23:59.207 "uuid": "214ad5ed-8964-4acf-a9cc-2c0501d4148a", 00:23:59.207 "no_auto_visible": false 00:23:59.207 } 00:23:59.207 } 00:23:59.207 }, 00:23:59.207 { 00:23:59.207 "method": "nvmf_subsystem_add_listener", 00:23:59.207 "params": { 00:23:59.207 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.207 "listen_address": { 00:23:59.207 "trtype": "TCP", 00:23:59.207 "adrfam": 
"IPv4", 00:23:59.207 "traddr": "10.0.0.2", 00:23:59.207 "trsvcid": "4420" 00:23:59.207 }, 00:23:59.207 "secure_channel": false, 00:23:59.207 "sock_impl": "ssl" 00:23:59.207 } 00:23:59.207 } 00:23:59.207 ] 00:23:59.207 } 00:23:59.207 ] 00:23:59.207 }' 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357994 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357994 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357994 ']' 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.207 22:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.207 [2024-12-16 22:29:48.891347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:59.207 [2024-12-16 22:29:48.891390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.466 [2024-12-16 22:29:48.969161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.466 [2024-12-16 22:29:48.990198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.466 [2024-12-16 22:29:48.990234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.466 [2024-12-16 22:29:48.990242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.466 [2024-12-16 22:29:48.990248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.466 [2024-12-16 22:29:48.990253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.466 [2024-12-16 22:29:48.990776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.725 [2024-12-16 22:29:49.198941] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.725 [2024-12-16 22:29:49.230979] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.725 [2024-12-16 22:29:49.231188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=358038 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 358038 /var/tmp/bdevperf.sock 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358038 ']' 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
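The tgtcfg and bperfcfg JSON blobs captured above with save_config drive this final variant of the test: instead of issuing live RPCs, the saved configuration is replayed at process startup. nvmf_tgt was just relaunched with -c /dev/fd/62, and the bdevperf instance now waiting on /var/tmp/bdevperf.sock was started with -c /dev/fd/63 (its config echo follows). Those /dev/fd paths are consistent with bash process substitution; a sketch of the round trip under that assumption:

rpc=./scripts/rpc.py
tgtcfg=$($rpc save_config)                               # target state as JSON
bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)   # initiator state as JSON
# replay: both processes boot with keyring, TLS listener and attach pre-configured
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg") &
# the controller must exist straight from config before I/O is rerun
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that bperfcfg carries the bdev_nvme_attach_controller and bdev_enable_histogram calls, so the TLS connection is established during bdevperf startup rather than by a later RPC.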
00:24:00.293 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:00.293 "subsystems": [ 00:24:00.293 { 00:24:00.293 "subsystem": "keyring", 00:24:00.293 "config": [ 00:24:00.293 { 00:24:00.293 "method": "keyring_file_add_key", 00:24:00.293 "params": { 00:24:00.293 "name": "key0", 00:24:00.293 "path": "/tmp/tmp.vWxpl3eHQY" 00:24:00.293 } 00:24:00.293 } 00:24:00.293 ] 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "subsystem": "iobuf", 00:24:00.293 "config": [ 00:24:00.293 { 00:24:00.293 "method": "iobuf_set_options", 00:24:00.293 "params": { 00:24:00.293 "small_pool_count": 8192, 00:24:00.293 "large_pool_count": 1024, 00:24:00.293 "small_bufsize": 8192, 00:24:00.293 "large_bufsize": 135168, 00:24:00.293 "enable_numa": false 00:24:00.293 } 00:24:00.293 } 00:24:00.293 ] 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "subsystem": "sock", 00:24:00.293 "config": [ 00:24:00.293 { 00:24:00.293 "method": "sock_set_default_impl", 00:24:00.293 "params": { 00:24:00.293 "impl_name": "posix" 00:24:00.293 } 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "method": "sock_impl_set_options", 00:24:00.293 "params": { 00:24:00.293 "impl_name": "ssl", 00:24:00.293 "recv_buf_size": 4096, 00:24:00.293 "send_buf_size": 4096, 00:24:00.293 "enable_recv_pipe": true, 00:24:00.293 "enable_quickack": false, 00:24:00.293 "enable_placement_id": 0, 00:24:00.293 "enable_zerocopy_send_server": true, 00:24:00.293 "enable_zerocopy_send_client": false, 00:24:00.293 "zerocopy_threshold": 0, 00:24:00.293 "tls_version": 0, 00:24:00.293 "enable_ktls": false 00:24:00.293 } 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "method": "sock_impl_set_options", 00:24:00.293 "params": { 00:24:00.293 "impl_name": "posix", 00:24:00.293 "recv_buf_size": 2097152, 00:24:00.293 "send_buf_size": 2097152, 00:24:00.293 "enable_recv_pipe": true, 00:24:00.293 "enable_quickack": false, 00:24:00.293 "enable_placement_id": 0, 00:24:00.293 "enable_zerocopy_send_server": true, 00:24:00.293 "enable_zerocopy_send_client": false, 00:24:00.293 "zerocopy_threshold": 0, 00:24:00.293 "tls_version": 0, 00:24:00.293 "enable_ktls": false 00:24:00.293 } 00:24:00.293 } 00:24:00.293 ] 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "subsystem": "vmd", 00:24:00.293 "config": [] 00:24:00.293 }, 00:24:00.293 { 00:24:00.293 "subsystem": "accel", 00:24:00.293 "config": [ 00:24:00.293 { 00:24:00.293 "method": "accel_set_options", 00:24:00.293 "params": { 00:24:00.293 "small_cache_size": 128, 00:24:00.293 "large_cache_size": 16, 00:24:00.294 "task_count": 2048, 00:24:00.294 "sequence_count": 2048, 00:24:00.294 "buf_count": 2048 00:24:00.294 } 00:24:00.294 } 00:24:00.294 ] 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "subsystem": "bdev", 00:24:00.294 "config": [ 00:24:00.294 { 00:24:00.294 "method": "bdev_set_options", 00:24:00.294 "params": { 00:24:00.294 "bdev_io_pool_size": 65535, 00:24:00.294 "bdev_io_cache_size": 256, 00:24:00.294 "bdev_auto_examine": true, 00:24:00.294 "iobuf_small_cache_size": 128, 00:24:00.294 "iobuf_large_cache_size": 16 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_raid_set_options", 00:24:00.294 "params": { 00:24:00.294 "process_window_size_kb": 1024, 00:24:00.294 "process_max_bandwidth_mb_sec": 0 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_iscsi_set_options", 00:24:00.294 "params": { 00:24:00.294 "timeout_sec": 30 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_nvme_set_options", 00:24:00.294 "params": { 00:24:00.294 "action_on_timeout": "none", 
00:24:00.294 "timeout_us": 0, 00:24:00.294 "timeout_admin_us": 0, 00:24:00.294 "keep_alive_timeout_ms": 10000, 00:24:00.294 "arbitration_burst": 0, 00:24:00.294 "low_priority_weight": 0, 00:24:00.294 "medium_priority_weight": 0, 00:24:00.294 "high_priority_weight": 0, 00:24:00.294 "nvme_adminq_poll_period_us": 10000, 00:24:00.294 "nvme_ioq_poll_period_us": 0, 00:24:00.294 "io_queue_requests": 512, 00:24:00.294 "delay_cmd_submit": true, 00:24:00.294 "transport_retry_count": 4, 00:24:00.294 "bdev_retry_count": 3, 00:24:00.294 "transport_ack_timeout": 0, 00:24:00.294 "ctrlr_loss_timeout_sec": 0, 00:24:00.294 "reconnect_delay_sec": 0, 00:24:00.294 "fast_io_fail_timeout_sec": 0, 00:24:00.294 "disable_auto_failback": false, 00:24:00.294 "generate_uuids": false, 00:24:00.294 "transport_tos": 0, 00:24:00.294 "nvme_error_stat": false, 00:24:00.294 "rdma_srq_size": 0, 00:24:00.294 "io_path_stat": false, 00:24:00.294 "allow_accel_sequence": false, 00:24:00.294 "rdma_max_cq_size": 0, 00:24:00.294 "rdma_cm_event_timeout_ms": 0 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.294 , 00:24:00.294 "dhchap_digests": [ 00:24:00.294 "sha256", 00:24:00.294 "sha384", 00:24:00.294 "sha512" 00:24:00.294 ], 00:24:00.294 "dhchap_dhgroups": [ 00:24:00.294 "null", 00:24:00.294 "ffdhe2048", 00:24:00.294 "ffdhe3072", 00:24:00.294 "ffdhe4096", 00:24:00.294 "ffdhe6144", 00:24:00.294 "ffdhe8192" 00:24:00.294 ], 00:24:00.294 "rdma_umr_per_io": false 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_nvme_attach_controller", 00:24:00.294 "params": { 00:24:00.294 "name": "nvme0", 00:24:00.294 "trtype": "TCP", 00:24:00.294 "adrfam": "IPv4", 00:24:00.294 "traddr": "10.0.0.2", 00:24:00.294 "trsvcid": "4420", 00:24:00.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.294 "prchk_reftag": false, 00:24:00.294 "prchk_guard": false, 00:24:00.294 "ctrlr_loss_timeout_sec": 0, 00:24:00.294 "reconnect_delay_sec": 0, 00:24:00.294 "fast_io_fail_timeout_sec": 0, 00:24:00.294 "psk": "key0", 00:24:00.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.294 "hdgst": false, 00:24:00.294 "ddgst": false, 00:24:00.294 "multipath": "multipath" 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_nvme_set_hotplug", 00:24:00.294 "params": { 00:24:00.294 "period_us": 100000, 00:24:00.294 "enable": false 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_enable_histogram", 00:24:00.294 "params": { 00:24:00.294 "name": "nvme0n1", 00:24:00.294 "enable": true 00:24:00.294 } 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "method": "bdev_wait_for_examine" 00:24:00.294 } 00:24:00.294 ] 00:24:00.294 }, 00:24:00.294 { 00:24:00.294 "subsystem": "nbd", 00:24:00.294 "config": [] 00:24:00.294 } 00:24:00.294 ] 00:24:00.294 }' 00:24:00.294 22:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.294 [2024-12-16 22:29:49.805116] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:00.294 [2024-12-16 22:29:49.805161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358038 ] 00:24:00.294 [2024-12-16 22:29:49.878917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.294 [2024-12-16 22:29:49.900698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.553 [2024-12-16 22:29:50.052938] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.120 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.120 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.120 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:01.120 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:01.120 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.378 22:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.378 Running I/O for 1 seconds... 00:24:02.314 4335.00 IOPS, 16.93 MiB/s 00:24:02.314 Latency(us) 00:24:02.314 [2024-12-16T21:29:52.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.314 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.314 Verification LBA range: start 0x0 length 0x2000 00:24:02.314 nvme0n1 : 1.01 4404.99 17.21 0.00 0.00 28870.89 5180.46 33454.57 00:24:02.314 [2024-12-16T21:29:52.015Z] =================================================================================================================== 00:24:02.314 [2024-12-16T21:29:52.015Z] Total : 4404.99 17.21 0.00 0.00 28870.89 5180.46 33454.57 00:24:02.314 { 00:24:02.314 "results": [ 00:24:02.314 { 00:24:02.314 "job": "nvme0n1", 00:24:02.314 "core_mask": "0x2", 00:24:02.314 "workload": "verify", 00:24:02.314 "status": "finished", 00:24:02.314 "verify_range": { 00:24:02.314 "start": 0, 00:24:02.314 "length": 8192 00:24:02.314 }, 00:24:02.314 "queue_depth": 128, 00:24:02.314 "io_size": 4096, 00:24:02.314 "runtime": 1.013169, 00:24:02.314 "iops": 4404.990677764519, 00:24:02.314 "mibps": 17.206994835017653, 00:24:02.314 "io_failed": 0, 00:24:02.314 "io_timeout": 0, 00:24:02.314 "avg_latency_us": 28870.893505329535, 00:24:02.314 "min_latency_us": 5180.464761904762, 00:24:02.314 "max_latency_us": 33454.56761904762 00:24:02.314 } 00:24:02.314 ], 00:24:02.314 "core_count": 1 00:24:02.314 } 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:02.314 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:02.315 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:02.315 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:02.315 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:02.315 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:02.315 22:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:02.315 nvmf_trace.0 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 358038 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358038 ']' 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358038 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358038 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358038' 00:24:02.573 killing process with pid 358038 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358038 00:24:02.573 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.573 00:24:02.573 Latency(us) 00:24:02.573 [2024-12-16T21:29:52.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.573 [2024-12-16T21:29:52.274Z] =================================================================================================================== 00:24:02.573 [2024-12-16T21:29:52.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358038 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.573 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.573 rmmod nvme_tcp 00:24:02.573 rmmod nvme_fabrics 00:24:02.833 rmmod nvme_keyring 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.833 22:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 357994 ']' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357994 ']' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357994' 00:24:02.833 killing process with pid 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357994 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.833 22:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.A5Q3i5TvYu /tmp/tmp.6UpFnjJv18 /tmp/tmp.vWxpl3eHQY 00:24:05.368 00:24:05.368 real 1m18.649s 00:24:05.368 user 2m0.632s 00:24:05.368 sys 0m29.916s 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.368 ************************************ 00:24:05.368 END TEST nvmf_tls 00:24:05.368 
************************************ 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.368 ************************************ 00:24:05.368 START TEST nvmf_fips 00:24:05.368 ************************************ 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:05.368 * Looking for test storage... 00:24:05.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:05.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.368 --rc genhtml_branch_coverage=1 00:24:05.368 --rc genhtml_function_coverage=1 00:24:05.368 --rc genhtml_legend=1 00:24:05.368 --rc geninfo_all_blocks=1 00:24:05.368 --rc geninfo_unexecuted_blocks=1 00:24:05.368 00:24:05.368 ' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:05.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.368 --rc genhtml_branch_coverage=1 00:24:05.368 --rc genhtml_function_coverage=1 00:24:05.368 --rc genhtml_legend=1 00:24:05.368 --rc geninfo_all_blocks=1 00:24:05.368 --rc geninfo_unexecuted_blocks=1 00:24:05.368 00:24:05.368 ' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:05.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.368 --rc genhtml_branch_coverage=1 00:24:05.368 --rc genhtml_function_coverage=1 00:24:05.368 --rc genhtml_legend=1 00:24:05.368 --rc geninfo_all_blocks=1 00:24:05.368 --rc geninfo_unexecuted_blocks=1 00:24:05.368 00:24:05.368 ' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:05.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.368 --rc genhtml_branch_coverage=1 00:24:05.368 --rc genhtml_function_coverage=1 00:24:05.368 --rc genhtml_legend=1 00:24:05.368 --rc geninfo_all_blocks=1 00:24:05.368 --rc geninfo_unexecuted_blocks=1 00:24:05.368 00:24:05.368 ' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.368 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:05.369 22:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:05.369 22:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:05.369 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.370 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:05.370 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:05.370 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:05.629 Error setting digest 00:24:05.629 4062F137937F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:05.629 4062F137937F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.629 
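The "Error setting digest" lines above are the expected result of the negative check in fips.sh@128: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the FIPS provider must reject MD5. A standalone spot-check with nothing SPDK-specific, assuming an OpenSSL 3.x build with the fips module installed:

  # a fips provider entry should appear, and md5 should be refused
  openssl list -providers | grep name
  openssl md5 /dev/null \
      && echo 'md5 accepted: FIPS mode NOT enforced' \
      || echo 'md5 rejected: FIPS mode enforced'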
22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.629 22:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.194 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.195 22:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:12.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:12.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.195 22:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:12.195 Found net devices under 0000:af:00.0: cvl_0_0 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:12.195 Found net devices under 0000:af:00.1: cvl_0_1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.195 22:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.195 22:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:24:12.195 00:24:12.195 --- 10.0.0.2 ping statistics --- 00:24:12.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.195 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:24:12.195 00:24:12.195 --- 10.0.0.1 ping statistics --- 00:24:12.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.195 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.195 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=362054 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 362054 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362054 ']' 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 [2024-12-16 22:30:01.123073] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
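The nvmf target just launched inside the cvl_0_0_ns_spdk namespace built by nvmf_tcp_init above, so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) exchange packets over a real NIC pair on a single host. Condensed from the steps in the log; the cvl_* names belong to this rig's ice ports, and any two connected interfaces would do:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator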
00:24:12.196 [2024-12-16 22:30:01.123122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.196 [2024-12-16 22:30:01.197968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.196 [2024-12-16 22:30:01.218695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.196 [2024-12-16 22:30:01.218730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.196 [2024-12-16 22:30:01.218737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.196 [2024-12-16 22:30:01.218743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.196 [2024-12-16 22:30:01.218749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.196 [2024-12-16 22:30:01.219239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.OVr 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.OVr 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.OVr 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.OVr 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.196 [2024-12-16 22:30:01.538900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.196 [2024-12-16 22:30:01.554908] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.196 [2024-12-16 22:30:01.555099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.196 malloc0 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.196 22:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=362326 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 362326 /var/tmp/bdevperf.sock 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362326 ']' 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.196 [2024-12-16 22:30:01.682232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:12.196 [2024-12-16 22:30:01.682287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362326 ] 00:24:12.196 [2024-12-16 22:30:01.753038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.196 [2024-12-16 22:30:01.775165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.196 22:30:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.OVr 00:24:12.454 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.712 [2024-12-16 22:30:02.226609] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.712 TLSTESTn1 00:24:12.712 22:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:12.712 Running I/O for 10 seconds... 
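The per-second samples that follow come from bdevperf's verify workload against TLSTESTn1, the bdev created from the TLS-wrapped controller. For interactive debugging, the same RPC socket can confirm the controller came up; a sketch, with the controller name as set by -b TLSTEST in fips.sh@152 above:

  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: TLSTEST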
00:24:15.025 5353.00 IOPS, 20.91 MiB/s [2024-12-16T21:30:05.665Z] 4758.50 IOPS, 18.59 MiB/s [2024-12-16T21:30:06.601Z] 4842.00 IOPS, 18.91 MiB/s [2024-12-16T21:30:07.537Z] 4894.00 IOPS, 19.12 MiB/s [2024-12-16T21:30:08.474Z] 4925.60 IOPS, 19.24 MiB/s [2024-12-16T21:30:09.850Z] 4948.50 IOPS, 19.33 MiB/s [2024-12-16T21:30:10.786Z] 4961.57 IOPS, 19.38 MiB/s [2024-12-16T21:30:11.722Z] 4973.50 IOPS, 19.43 MiB/s [2024-12-16T21:30:12.658Z] 4947.67 IOPS, 19.33 MiB/s [2024-12-16T21:30:12.658Z] 4924.90 IOPS, 19.24 MiB/s
00:24:22.957 Latency(us)
00:24:22.957 [2024-12-16T21:30:12.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:22.957 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:22.957 Verification LBA range: start 0x0 length 0x2000
00:24:22.957 TLSTESTn1 : 10.02 4929.02 19.25 0.00 0.00 25931.46 5180.46 36700.16
00:24:22.957 [2024-12-16T21:30:12.658Z] ===================================================================================================================
00:24:22.957 [2024-12-16T21:30:12.658Z] Total : 4929.02 19.25 0.00 0.00 25931.46 5180.46 36700.16
00:24:22.957 {
00:24:22.957   "results": [
00:24:22.957     {
00:24:22.957       "job": "TLSTESTn1",
00:24:22.957       "core_mask": "0x4",
00:24:22.957       "workload": "verify",
00:24:22.957       "status": "finished",
00:24:22.957       "verify_range": {
00:24:22.957         "start": 0,
00:24:22.957         "length": 8192
00:24:22.957       },
00:24:22.957       "queue_depth": 128,
00:24:22.957       "io_size": 4096,
00:24:22.957       "runtime": 10.017606,
00:24:22.957       "iops": 4929.021963930304,
00:24:22.957       "mibps": 19.25399204660275,
00:24:22.957       "io_failed": 0,
00:24:22.957       "io_timeout": 0,
00:24:22.957       "avg_latency_us": 25931.45901058619,
00:24:22.957       "min_latency_us": 5180.464761904762,
00:24:22.957       "max_latency_us": 36700.16
00:24:22.957     }
00:24:22.957   ],
00:24:22.957   "core_count": 1
00:24:22.957 }
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:22.957 nvmf_trace.0
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 362326
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362326 ']'
00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958
-- # kill -0 362326 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362326 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362326' 00:24:22.957 killing process with pid 362326 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362326 00:24:22.957 Received shutdown signal, test time was about 10.000000 seconds 00:24:22.957 00:24:22.957 Latency(us) 00:24:22.957 [2024-12-16T21:30:12.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.957 [2024-12-16T21:30:12.658Z] =================================================================================================================== 00:24:22.957 [2024-12-16T21:30:12.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.957 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362326 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.216 rmmod nvme_tcp 00:24:23.216 rmmod nvme_fabrics 00:24:23.216 rmmod nvme_keyring 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 362054 ']' 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 362054 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362054 ']' 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362054 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362054 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362054' 00:24:23.216 killing process with pid 362054 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362054 00:24:23.216 22:30:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362054 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.475 22:30:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.OVr 00:24:26.009 00:24:26.009 real 0m20.465s 00:24:26.009 user 0m20.923s 00:24:26.009 sys 0m9.868s 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.009 ************************************ 00:24:26.009 END TEST nvmf_fips 00:24:26.009 ************************************ 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.009 ************************************ 00:24:26.009 START TEST nvmf_control_msg_list 00:24:26.009 ************************************ 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.009 * Looking for test storage... 
00:24:26.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.009 --rc genhtml_branch_coverage=1 00:24:26.009 --rc genhtml_function_coverage=1 00:24:26.009 --rc genhtml_legend=1 00:24:26.009 --rc geninfo_all_blocks=1 00:24:26.009 --rc geninfo_unexecuted_blocks=1 00:24:26.009 00:24:26.009 ' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.009 --rc genhtml_branch_coverage=1 00:24:26.009 --rc genhtml_function_coverage=1 00:24:26.009 --rc genhtml_legend=1 00:24:26.009 --rc geninfo_all_blocks=1 00:24:26.009 --rc geninfo_unexecuted_blocks=1 00:24:26.009 00:24:26.009 ' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.009 --rc genhtml_branch_coverage=1 00:24:26.009 --rc genhtml_function_coverage=1 00:24:26.009 --rc genhtml_legend=1 00:24:26.009 --rc geninfo_all_blocks=1 00:24:26.009 --rc geninfo_unexecuted_blocks=1 00:24:26.009 00:24:26.009 ' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.009 --rc genhtml_branch_coverage=1 00:24:26.009 --rc genhtml_function_coverage=1 00:24:26.009 --rc genhtml_legend=1 00:24:26.009 --rc geninfo_all_blocks=1 00:24:26.009 --rc geninfo_unexecuted_blocks=1 00:24:26.009 00:24:26.009 ' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.009 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.010 22:30:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:31.285 22:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.285 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.285 22:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.285 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.285 Found net devices under 0000:af:00.0: cvl_0_0 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.285 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.285 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.286 22:30:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.545 22:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:24:31.545 00:24:31.545 --- 10.0.0.2 ping statistics --- 00:24:31.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.545 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:31.545 00:24:31.545 --- 10.0.0.1 ping statistics --- 00:24:31.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.545 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=367895 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 367895 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 367895 ']' 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.545 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.804 [2024-12-16 22:30:21.270084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:31.804 [2024-12-16 22:30:21.270138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.804 [2024-12-16 22:30:21.347907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.804 [2024-12-16 22:30:21.369814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.804 [2024-12-16 22:30:21.369849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.804 [2024-12-16 22:30:21.369857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.804 [2024-12-16 22:30:21.369862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.804 [2024-12-16 22:30:21.369867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
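For orientation before the test proper: the target in this test runs inside the cvl_0_0_ns_spdk network namespace built by the nvmf_tcp_init steps traced above. Stripped of the harness wrappers, that plumbing reduces to the sketch below; the cvl_0_0/cvl_0_1 device names are the two E810 ports discovered on this rig, so they are specific to this machine.

  # Move one port into a private namespace (target side), keep the other in
  # the root namespace (initiator side), and open TCP/4420 between them.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # The target then runs inside the namespace, exactly as launched above:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF

The two pings in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) confirm this topology before the target is started.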
00:24:31.804 [2024-12-16 22:30:21.370392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.804 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:31.805 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 [2024-12-16 22:30:21.513216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 Malloc0 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.064 22:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.064 [2024-12-16 22:30:21.553527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=367999 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=368000 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=368001 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 367999 00:24:32.064 22:30:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.064 [2024-12-16 22:30:21.621984] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.064 [2024-12-16 22:30:21.631906] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.064 [2024-12-16 22:30:21.642112] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:33.000 Initializing NVMe Controllers 00:24:33.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:33.000 Initialization complete. Launching workers. 
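The point of this test is the --control-msg-num 1 limit passed when the transport was created above: with a single control-message buffer, the three concurrent single-queue initiators have to share it, and the far lower throughput of the lcore-2 run in the result tables that follow is consistent with waiting on that buffer. Unwrapped from the rpc_cmd helper (which in this harness dispatches scripts/rpc.py against the namespaced target's default RPC socket), the configuration amounts to this sketch:

  # Transport with in-capsule data and exactly one control message buffer.
  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512    # 32 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Three initiators on lcores 1-3 then contend for the single buffer:
  for mask in 0x2 0x4 0x8; do
    build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait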
00:24:33.000 ========================================================
00:24:33.000 Latency(us)
00:24:33.000 Device Information : IOPS MiB/s Average min max
00:24:33.000 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5326.00 20.80 187.40 122.36 400.81
00:24:33.000 ========================================================
00:24:33.000 Total : 5326.00 20.80 187.40 122.36 400.81
00:24:33.000
00:24:33.259 Initializing NVMe Controllers
00:24:33.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:33.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:33.259 Initialization complete. Launching workers.
00:24:33.259 ========================================================
00:24:33.259 Latency(us)
00:24:33.259 Device Information : IOPS MiB/s Average min max
00:24:33.259 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 5922.94 23.14 168.48 125.45 367.47
00:24:33.259 ========================================================
00:24:33.259 Total : 5922.94 23.14 168.48 125.45 367.47
00:24:33.259
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 368000
00:24:33.259 Initializing NVMe Controllers
00:24:33.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:33.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:33.259 Initialization complete. Launching workers.
00:24:33.259 ========================================================
00:24:33.259 Latency(us)
00:24:33.259 Device Information : IOPS MiB/s Average min max
00:24:33.259 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 80.00 0.31 12978.30 245.79 41886.40
00:24:33.259 ========================================================
00:24:33.259 Total : 80.00 0.31 12978.30 245.79 41886.40
00:24:33.259
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 368001
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:33.259 22:30:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:33.519 rmmod nvme_tcp
00:24:33.519 rmmod nvme_fabrics
00:24:33.519 rmmod nvme_keyring
00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '['
-n 367895 ']' 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 367895 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 367895 ']' 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 367895 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367895 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367895' 00:24:33.519 killing process with pid 367895 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 367895 00:24:33.519 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 367895 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.778 22:30:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.683 00:24:35.683 real 0m10.126s 00:24:35.683 user 0m6.855s 00:24:35.683 sys 0m5.352s 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:35.683 ************************************ 00:24:35.683 END TEST nvmf_control_msg_list 00:24:35.683 ************************************ 00:24:35.683 
22:30:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.683 22:30:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:35.942 ************************************ 00:24:35.942 START TEST nvmf_wait_for_buf 00:24:35.942 ************************************ 00:24:35.942 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:35.942 * Looking for test storage... 00:24:35.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.942 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:35.942 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:35.942 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.943 --rc genhtml_branch_coverage=1 00:24:35.943 --rc genhtml_function_coverage=1 00:24:35.943 --rc genhtml_legend=1 00:24:35.943 --rc geninfo_all_blocks=1 00:24:35.943 --rc geninfo_unexecuted_blocks=1 00:24:35.943 00:24:35.943 ' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.943 --rc genhtml_branch_coverage=1 00:24:35.943 --rc genhtml_function_coverage=1 00:24:35.943 --rc genhtml_legend=1 00:24:35.943 --rc geninfo_all_blocks=1 00:24:35.943 --rc geninfo_unexecuted_blocks=1 00:24:35.943 00:24:35.943 ' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.943 --rc genhtml_branch_coverage=1 00:24:35.943 --rc genhtml_function_coverage=1 00:24:35.943 --rc genhtml_legend=1 00:24:35.943 --rc geninfo_all_blocks=1 00:24:35.943 --rc geninfo_unexecuted_blocks=1 00:24:35.943 00:24:35.943 ' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.943 --rc genhtml_branch_coverage=1 00:24:35.943 --rc genhtml_function_coverage=1 00:24:35.943 --rc genhtml_legend=1 00:24:35.943 --rc geninfo_all_blocks=1 00:24:35.943 --rc geninfo_unexecuted_blocks=1 00:24:35.943 00:24:35.943 ' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.943 22:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.943 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.944 22:30:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.512 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.513 
22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:42.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:42.513 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:42.513 Found net devices under 0000:af:00.0: cvl_0_0 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:42.513 Found net devices under 0000:af:00.1: cvl_0_1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.513 22:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:24:42.513 00:24:42.513 --- 10.0.0.2 ping statistics --- 00:24:42.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.513 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:24:42.513 00:24:42.513 --- 10.0.0.1 ping statistics --- 00:24:42.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.513 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.513 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=371689 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 371689 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 371689 ']' 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 [2024-12-16 22:30:31.533954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:42.514 [2024-12-16 22:30:31.534003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.514 [2024-12-16 22:30:31.613306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.514 [2024-12-16 22:30:31.634640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.514 [2024-12-16 22:30:31.634673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.514 [2024-12-16 22:30:31.634680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.514 [2024-12-16 22:30:31.634685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.514 [2024-12-16 22:30:31.634690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.514 [2024-12-16 22:30:31.635150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 Malloc0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 [2024-12-16 22:30:31.815466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.514 [2024-12-16 22:30:31.843650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.514 22:30:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:42.514 [2024-12-16 22:30:31.923258] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:43.889 Initializing NVMe Controllers 00:24:43.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:43.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:43.889 Initialization complete. Launching workers. 00:24:43.889 ======================================================== 00:24:43.889 Latency(us) 00:24:43.889 Device Information : IOPS MiB/s Average min max 00:24:43.889 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 99.00 12.38 42290.29 31899.95 111717.21 00:24:43.889 ======================================================== 00:24:43.889 Total : 99.00 12.38 42290.29 31899.95 111717.21 00:24:43.889 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1558 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1558 -eq 0 ]] 00:24:43.889 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.890 rmmod nvme_tcp 00:24:43.890 rmmod nvme_fabrics 00:24:43.890 rmmod nvme_keyring 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 371689 ']' 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 371689 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 371689 ']' 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 371689 00:24:43.890 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371689 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371689' 00:24:44.149 killing process with pid 371689 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 371689 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 371689 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.149 22:30:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:46.684 00:24:46.684 real 0m10.466s 00:24:46.684 user 0m4.052s 00:24:46.684 sys 0m4.854s 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.684 ************************************ 00:24:46.684 END TEST nvmf_wait_for_buf 00:24:46.684 ************************************ 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.684 ************************************ 00:24:46.684 START TEST nvmf_fuzz 00:24:46.684 ************************************ 00:24:46.684 22:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.684 * Looking for test storage... 00:24:46.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:46.684 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.684 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.684 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.684 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.685 --rc genhtml_branch_coverage=1 00:24:46.685 --rc genhtml_function_coverage=1 00:24:46.685 --rc genhtml_legend=1 00:24:46.685 --rc geninfo_all_blocks=1 00:24:46.685 --rc geninfo_unexecuted_blocks=1 00:24:46.685 00:24:46.685 ' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.685 --rc genhtml_branch_coverage=1 00:24:46.685 --rc genhtml_function_coverage=1 00:24:46.685 --rc genhtml_legend=1 00:24:46.685 --rc geninfo_all_blocks=1 00:24:46.685 --rc geninfo_unexecuted_blocks=1 00:24:46.685 00:24:46.685 ' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.685 --rc genhtml_branch_coverage=1 00:24:46.685 --rc genhtml_function_coverage=1 00:24:46.685 --rc genhtml_legend=1 00:24:46.685 --rc geninfo_all_blocks=1 00:24:46.685 --rc geninfo_unexecuted_blocks=1 00:24:46.685 00:24:46.685 ' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.685 --rc genhtml_branch_coverage=1 00:24:46.685 --rc genhtml_function_coverage=1 00:24:46.685 --rc genhtml_legend=1 00:24:46.685 --rc geninfo_all_blocks=1 00:24:46.685 --rc geninfo_unexecuted_blocks=1 00:24:46.685 00:24:46.685 ' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.685 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.686 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.686 22:30:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:53.255 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:53.255 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:53.255 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:53.256 Found net devices under 0000:af:00.0: cvl_0_0 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:53.256 Found net devices under 0000:af:00.1: cvl_0_1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:24:53.256 00:24:53.256 --- 10.0.0.2 ping statistics --- 00:24:53.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.256 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:53.256 22:30:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:24:53.256 00:24:53.256 --- 10.0.0.1 ping statistics --- 00:24:53.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.256 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=375401 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 375401 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 375401 ']' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
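For readers tracing the setup rather than the timestamps: the nvmf_tcp_init sequence above has just built the back-to-back TCP rig used by every test in this run. The first ice port, cvl_0_0, is moved into the cvl_0_0_ns_spdk network namespace as the target side and given 10.0.0.2/24; the second port, cvl_0_1, stays in the root namespace as the initiator with 10.0.0.1/24; an iptables rule opens the NVMe/TCP port 4420; and one ping in each direction confirms the link. The sketch below replays the same commands standalone. It is a minimal reconstruction, not the harness itself: it assumes root privileges and reuses the interface names and addresses from this log, which you would substitute on other hardware.

    TARGET_IF=cvl_0_0        # port handed to the SPDK target (names from this log)
    INITIATOR_IF=cvl_0_1     # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"              # isolate the target port
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port, then verify reachability both ways
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Since this is a phy autotest, the two ports are physically connected, so traffic between 10.0.0.1 and 10.0.0.2 takes a real network path even though target and initiator share one host; the namespace split is what keeps the kernel from short-circuiting it over loopback.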
00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 Malloc0 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:53.256 22:30:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:25.332 Fuzzing completed. 
Shutting down the fuzz application 00:25:25.332 00:25:25.332 Dumping successful admin opcodes: 00:25:25.332 9, 10, 00:25:25.332 Dumping successful io opcodes: 00:25:25.332 0, 9, 00:25:25.332 NS: 0x2000008eff00 I/O qp, Total commands completed: 1016885, total successful commands: 5961, random_seed: 913550720 00:25:25.332 NS: 0x2000008eff00 admin qp, Total commands completed: 133696, total successful commands: 29, random_seed: 131952448 00:25:25.333 22:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:25.333 Fuzzing completed. Shutting down the fuzz application 00:25:25.333 00:25:25.333 Dumping successful admin opcodes: 00:25:25.333 00:25:25.333 Dumping successful io opcodes: 00:25:25.333 00:25:25.333 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2573907336 00:25:25.333 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 2573972430 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.333 rmmod nvme_tcp 00:25:25.333 rmmod nvme_fabrics 00:25:25.333 rmmod nvme_keyring 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 375401 ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 375401 ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375401' 00:25:25.333 killing process with pid 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 375401 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.333 22:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.708 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:26.967 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:26.967 00:25:26.967 real 0m40.530s 00:25:26.967 user 0m54.347s 00:25:26.967 sys 0m15.368s 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:26.968 ************************************ 00:25:26.968 END TEST nvmf_fuzz 00:25:26.968 ************************************ 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:26.968 ************************************ 00:25:26.968 START TEST 
nvmf_multiconnection 00:25:26.968 ************************************ 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:26.968 * Looking for test storage... 00:25:26.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:26.968 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:27.227 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:27.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.228 --rc genhtml_branch_coverage=1 00:25:27.228 --rc genhtml_function_coverage=1 00:25:27.228 --rc genhtml_legend=1 00:25:27.228 --rc geninfo_all_blocks=1 00:25:27.228 --rc geninfo_unexecuted_blocks=1 00:25:27.228 00:25:27.228 ' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:27.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.228 --rc genhtml_branch_coverage=1 00:25:27.228 --rc genhtml_function_coverage=1 00:25:27.228 --rc genhtml_legend=1 00:25:27.228 --rc geninfo_all_blocks=1 00:25:27.228 --rc geninfo_unexecuted_blocks=1 00:25:27.228 00:25:27.228 ' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:27.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.228 --rc genhtml_branch_coverage=1 00:25:27.228 --rc genhtml_function_coverage=1 00:25:27.228 --rc genhtml_legend=1 00:25:27.228 --rc geninfo_all_blocks=1 00:25:27.228 --rc geninfo_unexecuted_blocks=1 00:25:27.228 00:25:27.228 ' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:27.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:27.228 --rc genhtml_branch_coverage=1 00:25:27.228 --rc genhtml_function_coverage=1 00:25:27.228 --rc genhtml_legend=1 00:25:27.228 --rc geninfo_all_blocks=1 00:25:27.228 --rc geninfo_unexecuted_blocks=1 00:25:27.228 00:25:27.228 ' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:27.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:27.228 22:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.818 22:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:33.818 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:33.818 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.818 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:33.818 Found net devices under 0000:af:00.0: cvl_0_0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:33.819 Found net devices under 0000:af:00.1: cvl_0_1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:25:33.819 00:25:33.819 --- 10.0.0.2 ping statistics --- 00:25:33.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.819 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:33.819 00:25:33.819 --- 10.0.0.1 ping statistics --- 00:25:33.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.819 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=383988 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 383988 00:25:33.819 22:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 383988 ']' 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 [2024-12-16 22:31:22.674622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:33.819 [2024-12-16 22:31:22.674662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.819 [2024-12-16 22:31:22.751257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.819 [2024-12-16 22:31:22.775144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.819 [2024-12-16 22:31:22.775183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.819 [2024-12-16 22:31:22.775190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.819 [2024-12-16 22:31:22.775200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.819 [2024-12-16 22:31:22.775204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
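The EAL banner and reactor notices around here are nvmf_tgt coming up inside the namespace: nvmfappstart launches it and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming the workspace path from this log and using rpc_get_methods purely as a readiness probe (the real helper retries with a bounded max_retries=100 rather than forever), might look like:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    pid=$!   # pid of the "ip netns exec" wrapper; good enough for a liveness check
    # The RPC socket is a Unix socket on the filesystem, so it is reachable
    # from the root namespace even though the target runs inside the netns.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done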
00:25:33.819 [2024-12-16 22:31:22.776617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.819 [2024-12-16 22:31:22.776728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.819 [2024-12-16 22:31:22.776831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.819 [2024-12-16 22:31:22.776832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 [2024-12-16 22:31:22.908488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 Malloc1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:33.819 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
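From here the trace unrolls the multiconnection setup loop: with NVMF_SUBSYS=11 (set earlier in multiconnection.sh), each pass creates a 64 MB malloc bdev with 512-byte blocks, a subsystem with a matching serial number, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. Condensed into a standalone sketch, with the RPC names and arguments exactly as echoed by rpc_cmd above and a local rpc() wrapper as the only addition:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192      # same transport options as the harness
    for i in $(seq 1 11); do
        rpc bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MB, 512 B blocks
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The rest of the test then connects to each of these eleven subsystems from the initiator side, which is what gives the multiconnection test its name.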
00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 [2024-12-16 22:31:22.976195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc2 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc3 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc4 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc5 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc6 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 Malloc7 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:33.820 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
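
At any point during this provisioning loop the accumulated configuration can be inspected over the same RPC socket; a hypothetical check (not part of the harness run) that dumps every subsystem with its namespaces and listen addresses:

  ./scripts/rpc.py nvmf_get_subsystems
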
00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 Malloc8 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 Malloc9 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:33.821 22:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 Malloc10 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 Malloc11 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.821 22:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:35.198 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:35.198 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.198 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.198 22:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.198 22:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.100 22:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:38.037 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:38.037 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.037 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.037 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.037 22:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.571 22:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:41.508 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:41.508 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.508 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:41.508 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.508 22:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.411 22:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:44.788 22:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:44.788 22:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:44.788 22:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.788 22:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.788 22:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.690 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.690 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.690 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:46.690 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.691 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.691 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.691 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.691 22:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:48.067 22:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:48.067 22:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:48.067 22:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.067 22:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.067 22:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.970 22:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:51.349 22:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:51.349 22:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:51.349 22:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.349 22:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.349 22:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.252 22:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:54.629 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:54.629 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.629 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.629 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.629 22:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.532 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.532 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.532 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:56.533 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.533 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.533 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.533 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.533 22:31:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:57.910 22:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:57.910 22:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:57.910 22:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.910 22:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:57.910 22:31:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:59.819 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:59.819 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.820 22:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:01.726 22:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:01.726 22:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.726 22:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.726 22:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.726 22:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.629 22:31:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:05.006 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:05.006 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.006 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.006 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.006 22:31:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:06.912 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:06.912 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:06.913 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:06.913 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:06.913 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.913 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:06.913 22:31:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.913 22:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:08.814 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:08.814 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.814 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.814 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.814 22:31:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.717 22:32:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:10.717 [global] 00:26:10.717 thread=1 00:26:10.717 invalidate=1 00:26:10.717 rw=read 00:26:10.717 time_based=1 00:26:10.717 runtime=10 00:26:10.717 ioengine=libaio 00:26:10.717 direct=1 00:26:10.717 bs=262144 00:26:10.717 iodepth=64 00:26:10.717 norandommap=1 00:26:10.717 numjobs=1 00:26:10.717 00:26:10.717 [job0] 00:26:10.717 filename=/dev/nvme0n1 00:26:10.717 [job1] 00:26:10.717 filename=/dev/nvme10n1 00:26:10.717 [job2] 00:26:10.717 filename=/dev/nvme1n1 00:26:10.717 [job3] 00:26:10.717 filename=/dev/nvme2n1 00:26:10.717 [job4] 00:26:10.717 filename=/dev/nvme3n1 00:26:10.717 [job5] 00:26:10.717 filename=/dev/nvme4n1 00:26:10.717 [job6] 00:26:10.717 filename=/dev/nvme5n1 00:26:10.717 [job7] 00:26:10.717 filename=/dev/nvme6n1 00:26:10.718 [job8] 00:26:10.718 filename=/dev/nvme7n1 00:26:10.718 [job9] 00:26:10.718 filename=/dev/nvme8n1 00:26:10.718 [job10] 00:26:10.718 filename=/dev/nvme9n1 00:26:10.718 Could not set queue depth (nvme0n1) 00:26:10.718 Could not set queue depth (nvme10n1) 00:26:10.718 Could not set queue depth (nvme1n1) 00:26:10.718 Could not set queue depth (nvme2n1) 00:26:10.718 Could not set queue depth (nvme3n1) 00:26:10.718 Could not set queue depth (nvme4n1) 00:26:10.718 Could not set queue depth (nvme5n1) 00:26:10.718 Could not set queue depth (nvme6n1) 00:26:10.718 Could not set queue depth (nvme7n1) 00:26:10.718 Could not set queue depth (nvme8n1) 00:26:10.718 Could not set queue depth (nvme9n1) 00:26:10.976 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.976 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.977 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.977 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.977 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:10.977 fio-3.35 00:26:10.977 Starting 11 threads 00:26:23.183 00:26:23.183 job0: (groupid=0, jobs=1): err= 0: pid=390524: Mon Dec 16 22:32:11 2024 00:26:23.183 read: IOPS=244, BW=61.0MiB/s (64.0MB/s)(619MiB/10137msec) 00:26:23.183 slat (usec): min=9, max=382802, avg=2599.82, stdev=15742.45 00:26:23.183 clat (msec): min=9, max=1003, avg=259.34, stdev=197.28 00:26:23.183 lat (msec): min=9, max=1003, avg=261.94, stdev=199.82 00:26:23.183 clat percentiles (msec): 00:26:23.183 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 79], 00:26:23.183 | 30.00th=[ 110], 40.00th=[ 155], 50.00th=[ 255], 60.00th=[ 292], 00:26:23.183 | 70.00th=[ 317], 80.00th=[ 376], 90.00th=[ 550], 95.00th=[ 634], 00:26:23.183 | 99.00th=[ 894], 99.50th=[ 961], 99.90th=[ 961], 99.95th=[ 1003], 00:26:23.183 | 99.99th=[ 1003] 00:26:23.183 bw ( KiB/s): min=17408, max=171008, per=6.58%, avg=61721.60, stdev=39899.87, samples=20 00:26:23.183 iops : min= 68, max= 668, avg=241.10, stdev=155.86, samples=20 00:26:23.183 lat (msec) : 10=0.16%, 20=0.61%, 50=10.06%, 100=17.02%, 250=21.46% 00:26:23.183 lat (msec) : 500=37.35%, 750=11.20%, 1000=2.06%, 2000=0.08% 00:26:23.183 cpu : usr=0.09%, sys=0.81%, ctx=456, majf=0, minf=4097 00:26:23.183 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:23.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.183 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.183 job1: (groupid=0, jobs=1): err= 0: pid=390525: Mon Dec 16 22:32:11 2024 00:26:23.183 read: IOPS=243, BW=60.8MiB/s (63.7MB/s)(615MiB/10113msec) 00:26:23.183 slat (usec): min=15, max=272310, avg=3191.00, stdev=16028.65 00:26:23.183 clat (usec): min=551, max=1038.3k, avg=259727.42, stdev=246224.74 00:26:23.183 lat (usec): min=577, max=1038.4k, avg=262918.43, stdev=249260.43 00:26:23.183 clat percentiles (msec): 00:26:23.183 | 1.00th=[ 7], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 52], 00:26:23.183 | 30.00th=[ 55], 40.00th=[ 73], 50.00th=[ 150], 60.00th=[ 215], 00:26:23.183 | 70.00th=[ 347], 80.00th=[ 575], 90.00th=[ 634], 95.00th=[ 684], 00:26:23.183 | 99.00th=[ 927], 99.50th=[ 986], 99.90th=[ 1036], 99.95th=[ 1036], 00:26:23.183 | 99.99th=[ 1036] 
00:26:23.183 bw ( KiB/s): min=20992, max=284160, per=6.54%, avg=61312.00, stdev=67831.76, samples=20 00:26:23.183 iops : min= 82, max= 1110, avg=239.50, stdev=264.97, samples=20 00:26:23.183 lat (usec) : 750=0.16%, 1000=0.12% 00:26:23.183 lat (msec) : 2=0.41%, 4=0.16%, 10=0.37%, 20=0.61%, 50=12.81% 00:26:23.183 lat (msec) : 100=27.00%, 250=21.43%, 500=11.75%, 750=22.24%, 1000=2.64% 00:26:23.183 lat (msec) : 2000=0.28% 00:26:23.183 cpu : usr=0.16%, sys=1.06%, ctx=517, majf=0, minf=4097 00:26:23.183 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:23.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.183 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.183 job2: (groupid=0, jobs=1): err= 0: pid=390526: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=352, BW=88.2MiB/s (92.5MB/s)(894MiB/10134msec) 00:26:23.184 slat (usec): min=14, max=360101, avg=2055.34, stdev=12523.22 00:26:23.184 clat (usec): min=1708, max=1242.0k, avg=179199.76, stdev=200961.18 00:26:23.184 lat (usec): min=1746, max=1242.0k, avg=181255.10, stdev=202847.78 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 50], 20.00th=[ 66], 00:26:23.184 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 103], 00:26:23.184 | 70.00th=[ 148], 80.00th=[ 288], 90.00th=[ 456], 95.00th=[ 625], 00:26:23.184 | 99.00th=[ 961], 99.50th=[ 1116], 99.90th=[ 1200], 99.95th=[ 1250], 00:26:23.184 | 99.99th=[ 1250] 00:26:23.184 bw ( KiB/s): min=16384, max=278528, per=9.58%, avg=89860.25, stdev=79252.60, samples=20 00:26:23.184 iops : min= 64, max= 1088, avg=351.00, stdev=309.59, samples=20 00:26:23.184 lat (msec) : 2=0.06%, 4=0.06%, 10=4.25%, 50=5.68%, 100=49.30% 00:26:23.184 lat (msec) : 250=17.04%, 500=14.58%, 750=6.21%, 1000=2.29%, 2000=0.53% 00:26:23.184 cpu : usr=0.08%, sys=1.53%, ctx=623, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=3574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job3: (groupid=0, jobs=1): err= 0: pid=390527: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=354, BW=88.6MiB/s (92.9MB/s)(901MiB/10177msec) 00:26:23.184 slat (usec): min=18, max=294300, avg=2785.04, stdev=13676.48 00:26:23.184 clat (msec): min=29, max=1149, avg=177.71, stdev=181.07 00:26:23.184 lat (msec): min=29, max=1150, avg=180.49, stdev=183.83 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 54], 5.00th=[ 61], 10.00th=[ 65], 20.00th=[ 69], 00:26:23.184 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 112], 00:26:23.184 | 70.00th=[ 161], 80.00th=[ 296], 90.00th=[ 401], 95.00th=[ 575], 00:26:23.184 | 99.00th=[ 911], 99.50th=[ 1036], 99.90th=[ 1116], 99.95th=[ 1116], 00:26:23.184 | 99.99th=[ 1150] 00:26:23.184 bw ( KiB/s): min=13824, max=241152, per=9.66%, avg=90663.85, stdev=75945.29, samples=20 00:26:23.184 iops : min= 54, max= 942, avg=354.15, stdev=296.66, samples=20 00:26:23.184 lat (msec) : 50=0.61%, 100=52.43%, 250=24.38%, 500=15.89%, 750=4.60% 00:26:23.184 lat (msec) : 1000=1.53%, 2000=0.55% 00:26:23.184 cpu : usr=0.17%, 
sys=1.48%, ctx=527, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=3605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job4: (groupid=0, jobs=1): err= 0: pid=390528: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=515, BW=129MiB/s (135MB/s)(1308MiB/10140msec) 00:26:23.184 slat (usec): min=16, max=258033, avg=1498.07, stdev=9046.24 00:26:23.184 clat (usec): min=1332, max=927791, avg=122434.62, stdev=172912.36 00:26:23.184 lat (usec): min=1382, max=1017.8k, avg=123932.69, stdev=174816.50 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 45], 20.00th=[ 47], 00:26:23.184 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 52], 60.00th=[ 62], 00:26:23.184 | 70.00th=[ 67], 80.00th=[ 88], 90.00th=[ 422], 95.00th=[ 592], 00:26:23.184 | 99.00th=[ 793], 99.50th=[ 835], 99.90th=[ 911], 99.95th=[ 911], 00:26:23.184 | 99.99th=[ 927] 00:26:23.184 bw ( KiB/s): min=12288, max=336896, per=14.10%, avg=132275.20, stdev=122054.80, samples=20 00:26:23.184 iops : min= 48, max= 1316, avg=516.70, stdev=476.78, samples=20 00:26:23.184 lat (msec) : 2=0.08%, 4=0.15%, 10=0.63%, 20=1.07%, 50=42.71% 00:26:23.184 lat (msec) : 100=37.32%, 250=3.88%, 500=6.25%, 750=6.71%, 1000=1.20% 00:26:23.184 cpu : usr=0.20%, sys=2.20%, ctx=1252, majf=0, minf=3722 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=5231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job5: (groupid=0, jobs=1): err= 0: pid=390529: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=173, BW=43.5MiB/s (45.6MB/s)(441MiB/10149msec) 00:26:23.184 slat (usec): min=13, max=191401, avg=3870.03, stdev=16742.34 00:26:23.184 clat (usec): min=565, max=1004.4k, avg=363937.58, stdev=252226.99 00:26:23.184 lat (usec): min=739, max=1038.8k, avg=367807.60, stdev=255897.91 00:26:23.184 clat percentiles (usec): 00:26:23.184 | 1.00th=[ 758], 5.00th=[ 2474], 10.00th=[ 2769], 00:26:23.184 | 20.00th=[ 57934], 30.00th=[ 160433], 40.00th=[ 316670], 00:26:23.184 | 50.00th=[ 392168], 60.00th=[ 492831], 70.00th=[ 557843], 00:26:23.184 | 80.00th=[ 608175], 90.00th=[ 650118], 95.00th=[ 692061], 00:26:23.184 | 99.00th=[ 926942], 99.50th=[ 926942], 99.90th=[1002439], 00:26:23.184 | 99.95th=[1002439], 99.99th=[1002439] 00:26:23.184 bw ( KiB/s): min=21504, max=122880, per=4.64%, avg=43524.65, stdev=26876.27, samples=20 00:26:23.184 iops : min= 84, max= 480, avg=170.00, stdev=104.98, samples=20 00:26:23.184 lat (usec) : 750=0.68%, 1000=1.25% 00:26:23.184 lat (msec) : 2=1.70%, 4=11.11%, 10=2.72%, 50=1.02%, 100=6.63% 00:26:23.184 lat (msec) : 250=9.81%, 500=25.40%, 750=37.24%, 1000=2.32%, 2000=0.11% 00:26:23.184 cpu : usr=0.17%, sys=0.71%, ctx=578, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:26:23.184 issued rwts: total=1764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job6: (groupid=0, jobs=1): err= 0: pid=390530: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=148, BW=37.1MiB/s (38.9MB/s)(376MiB/10148msec) 00:26:23.184 slat (usec): min=16, max=503432, avg=4168.39, stdev=21194.54 00:26:23.184 clat (msec): min=28, max=1197, avg=426.94, stdev=229.16 00:26:23.184 lat (msec): min=28, max=1197, avg=431.11, stdev=231.62 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 43], 5.00th=[ 94], 10.00th=[ 134], 20.00th=[ 201], 00:26:23.184 | 30.00th=[ 245], 40.00th=[ 309], 50.00th=[ 456], 60.00th=[ 542], 00:26:23.184 | 70.00th=[ 592], 80.00th=[ 634], 90.00th=[ 667], 95.00th=[ 718], 00:26:23.184 | 99.00th=[ 1045], 99.50th=[ 1045], 99.90th=[ 1133], 99.95th=[ 1200], 00:26:23.184 | 99.99th=[ 1200] 00:26:23.184 bw ( KiB/s): min=10240, max=82432, per=3.93%, avg=36918.25, stdev=20393.90, samples=20 00:26:23.184 iops : min= 40, max= 322, avg=144.20, stdev=79.67, samples=20 00:26:23.184 lat (msec) : 50=1.26%, 100=4.72%, 250=24.92%, 500=22.79%, 750=42.13% 00:26:23.184 lat (msec) : 1000=1.79%, 2000=2.39% 00:26:23.184 cpu : usr=0.07%, sys=0.61%, ctx=262, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=1505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job7: (groupid=0, jobs=1): err= 0: pid=390531: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=606, BW=152MiB/s (159MB/s)(1533MiB/10116msec) 00:26:23.184 slat (usec): min=11, max=180321, avg=1566.21, stdev=8331.24 00:26:23.184 clat (msec): min=14, max=772, avg=103.88, stdev=160.08 00:26:23.184 lat (msec): min=14, max=783, avg=105.45, stdev=162.59 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 32], 00:26:23.184 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 42], 00:26:23.184 | 70.00th=[ 53], 80.00th=[ 96], 90.00th=[ 338], 95.00th=[ 575], 00:26:23.184 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 743], 99.95th=[ 760], 00:26:23.184 | 99.99th=[ 776] 00:26:23.184 bw ( KiB/s): min=15872, max=502784, per=16.56%, avg=155371.20, stdev=180806.58, samples=20 00:26:23.184 iops : min= 62, max= 1964, avg=606.90, stdev=706.29, samples=20 00:26:23.184 lat (msec) : 20=0.41%, 50=69.00%, 100=11.51%, 250=7.79%, 500=4.55% 00:26:23.184 lat (msec) : 750=6.65%, 1000=0.08% 00:26:23.184 cpu : usr=0.21%, sys=2.13%, ctx=762, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=6133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job8: (groupid=0, jobs=1): err= 0: pid=390532: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=567, BW=142MiB/s (149MB/s)(1439MiB/10147msec) 00:26:23.184 slat (usec): min=9, max=219613, avg=1739.78, stdev=8669.00 00:26:23.184 clat (msec): min=20, max=725, avg=110.98, stdev=124.86 00:26:23.184 lat (msec): min=20, max=725, avg=112.72, stdev=126.76 
00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 28], 00:26:23.184 | 30.00th=[ 31], 40.00th=[ 33], 50.00th=[ 54], 60.00th=[ 81], 00:26:23.184 | 70.00th=[ 113], 80.00th=[ 205], 90.00th=[ 268], 95.00th=[ 397], 00:26:23.184 | 99.00th=[ 600], 99.50th=[ 625], 99.90th=[ 684], 99.95th=[ 709], 00:26:23.184 | 99.99th=[ 726] 00:26:23.184 bw ( KiB/s): min=26112, max=542208, per=15.53%, avg=145689.60, stdev=159654.36, samples=20 00:26:23.184 iops : min= 102, max= 2118, avg=569.10, stdev=623.65, samples=20 00:26:23.184 lat (msec) : 50=48.64%, 100=18.73%, 250=21.03%, 500=9.30%, 750=2.31% 00:26:23.184 cpu : usr=0.20%, sys=2.25%, ctx=790, majf=0, minf=4097 00:26:23.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:23.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.184 issued rwts: total=5755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.184 job9: (groupid=0, jobs=1): err= 0: pid=390533: Mon Dec 16 22:32:11 2024 00:26:23.184 read: IOPS=234, BW=58.6MiB/s (61.4MB/s)(595MiB/10150msec) 00:26:23.184 slat (usec): min=16, max=541759, avg=3916.26, stdev=19811.11 00:26:23.184 clat (msec): min=11, max=1176, avg=268.98, stdev=182.61 00:26:23.184 lat (msec): min=11, max=1520, avg=272.89, stdev=184.88 00:26:23.184 clat percentiles (msec): 00:26:23.184 | 1.00th=[ 37], 5.00th=[ 94], 10.00th=[ 114], 20.00th=[ 138], 00:26:23.184 | 30.00th=[ 171], 40.00th=[ 218], 50.00th=[ 243], 60.00th=[ 259], 00:26:23.185 | 70.00th=[ 279], 80.00th=[ 330], 90.00th=[ 439], 95.00th=[ 609], 00:26:23.185 | 99.00th=[ 1167], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:23.185 | 99.99th=[ 1183] 00:26:23.185 bw ( KiB/s): min=14336, max=135168, per=6.65%, avg=62383.16, stdev=30966.60, samples=19 00:26:23.185 iops : min= 56, max= 528, avg=243.68, stdev=120.96, samples=19 00:26:23.185 lat (msec) : 20=0.34%, 50=1.14%, 100=4.50%, 250=49.07%, 500=36.88% 00:26:23.185 lat (msec) : 750=4.88%, 1000=1.64%, 2000=1.56% 00:26:23.185 cpu : usr=0.14%, sys=0.87%, ctx=328, majf=0, minf=4098 00:26:23.185 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:23.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.185 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.185 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.185 job10: (groupid=0, jobs=1): err= 0: pid=390534: Mon Dec 16 22:32:11 2024 00:26:23.185 read: IOPS=239, BW=59.8MiB/s (62.7MB/s)(605MiB/10114msec) 00:26:23.185 slat (usec): min=11, max=247019, avg=3419.36, stdev=15658.90 00:26:23.185 clat (msec): min=10, max=945, avg=263.89, stdev=236.09 00:26:23.185 lat (msec): min=10, max=945, avg=267.31, stdev=239.37 00:26:23.185 clat percentiles (msec): 00:26:23.185 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 47], 00:26:23.185 | 30.00th=[ 81], 40.00th=[ 102], 50.00th=[ 138], 60.00th=[ 292], 00:26:23.185 | 70.00th=[ 397], 80.00th=[ 542], 90.00th=[ 625], 95.00th=[ 684], 00:26:23.185 | 99.00th=[ 810], 99.50th=[ 852], 99.90th=[ 944], 99.95th=[ 944], 00:26:23.185 | 99.99th=[ 944] 00:26:23.185 bw ( KiB/s): min=18432, max=268800, per=6.43%, avg=60288.00, stdev=61810.28, samples=20 00:26:23.185 iops : min= 72, max= 1050, avg=235.50, 
stdev=241.45, samples=20 00:26:23.185 lat (msec) : 20=1.94%, 50=20.71%, 100=17.03%, 250=18.27%, 500=19.02% 00:26:23.185 lat (msec) : 750=20.88%, 1000=2.15% 00:26:23.185 cpu : usr=0.18%, sys=0.87%, ctx=430, majf=0, minf=4097 00:26:23.185 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:23.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.185 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.185 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.185 00:26:23.185 Run status group 0 (all jobs): 00:26:23.185 READ: bw=916MiB/s (961MB/s), 37.1MiB/s-152MiB/s (38.9MB/s-159MB/s), io=9324MiB (9777MB), run=10113-10177msec 00:26:23.185 00:26:23.185 Disk stats (read/write): 00:26:23.185 nvme0n1: ios=4813/0, merge=0/0, ticks=1225467/0, in_queue=1225467, util=97.32% 00:26:23.185 nvme10n1: ios=4758/0, merge=0/0, ticks=1226792/0, in_queue=1226792, util=97.49% 00:26:23.185 nvme1n1: ios=7011/0, merge=0/0, ticks=1228504/0, in_queue=1228504, util=97.77% 00:26:23.185 nvme2n1: ios=7056/0, merge=0/0, ticks=1226332/0, in_queue=1226332, util=97.89% 00:26:23.185 nvme3n1: ios=10335/0, merge=0/0, ticks=1230309/0, in_queue=1230309, util=98.01% 00:26:23.185 nvme4n1: ios=3400/0, merge=0/0, ticks=1226678/0, in_queue=1226678, util=98.33% 00:26:23.185 nvme5n1: ios=2872/0, merge=0/0, ticks=1234259/0, in_queue=1234259, util=98.46% 00:26:23.185 nvme6n1: ios=12083/0, merge=0/0, ticks=1214001/0, in_queue=1214001, util=98.58% 00:26:23.185 nvme7n1: ios=11383/0, merge=0/0, ticks=1226225/0, in_queue=1226225, util=98.96% 00:26:23.185 nvme8n1: ios=4612/0, merge=0/0, ticks=1225257/0, in_queue=1225257, util=99.15% 00:26:23.185 nvme9n1: ios=4679/0, merge=0/0, ticks=1228756/0, in_queue=1228756, util=99.28% 00:26:23.185 22:32:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:23.185 [global] 00:26:23.185 thread=1 00:26:23.185 invalidate=1 00:26:23.185 rw=randwrite 00:26:23.185 time_based=1 00:26:23.185 runtime=10 00:26:23.185 ioengine=libaio 00:26:23.185 direct=1 00:26:23.185 bs=262144 00:26:23.185 iodepth=64 00:26:23.185 norandommap=1 00:26:23.185 numjobs=1 00:26:23.185 00:26:23.185 [job0] 00:26:23.185 filename=/dev/nvme0n1 00:26:23.185 [job1] 00:26:23.185 filename=/dev/nvme10n1 00:26:23.185 [job2] 00:26:23.185 filename=/dev/nvme1n1 00:26:23.185 [job3] 00:26:23.185 filename=/dev/nvme2n1 00:26:23.185 [job4] 00:26:23.185 filename=/dev/nvme3n1 00:26:23.185 [job5] 00:26:23.185 filename=/dev/nvme4n1 00:26:23.185 [job6] 00:26:23.185 filename=/dev/nvme5n1 00:26:23.185 [job7] 00:26:23.185 filename=/dev/nvme6n1 00:26:23.185 [job8] 00:26:23.185 filename=/dev/nvme7n1 00:26:23.185 [job9] 00:26:23.185 filename=/dev/nvme8n1 00:26:23.185 [job10] 00:26:23.185 filename=/dev/nvme9n1 00:26:23.185 Could not set queue depth (nvme0n1) 00:26:23.185 Could not set queue depth (nvme10n1) 00:26:23.185 Could not set queue depth (nvme1n1) 00:26:23.185 Could not set queue depth (nvme2n1) 00:26:23.185 Could not set queue depth (nvme3n1) 00:26:23.185 Could not set queue depth (nvme4n1) 00:26:23.185 Could not set queue depth (nvme5n1) 00:26:23.185 Could not set queue depth (nvme6n1) 00:26:23.185 Could not set queue depth (nvme7n1) 00:26:23.185 Could not set queue depth (nvme8n1) 00:26:23.185 Could not set queue depth (nvme9n1) 
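The job file echoed above maps one-to-one onto standalone fio options; as a minimal sketch, the first job could be reproduced by hand outside the fio-wrapper harness like this (the device path /dev/nvme0n1 comes from the [job0] stanza; running fio directly, rather than via the wrapper, is an assumption):

# replay the [global] section printed in the log as command-line flags
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=262144 --iodepth=64 --numjobs=1 \
    --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --time_based --runtime=10 --norandommap

Each flag mirrors a line of the [global] section, with [job0] supplying the filename; the other ten jobs differ only in the /dev/nvmeXn1 device they target.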
00:26:23.185 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:23.185 fio-3.35 00:26:23.185 Starting 11 threads 00:26:33.163 00:26:33.163 job0: (groupid=0, jobs=1): err= 0: pid=391557: Mon Dec 16 22:32:22 2024 00:26:33.163 write: IOPS=487, BW=122MiB/s (128MB/s)(1235MiB/10139msec); 0 zone resets 00:26:33.163 slat (usec): min=16, max=59207, avg=1791.04, stdev=4378.29 00:26:33.163 clat (msec): min=4, max=353, avg=129.46, stdev=83.06 00:26:33.163 lat (msec): min=4, max=353, avg=131.25, stdev=84.23 00:26:33.163 clat percentiles (msec): 00:26:33.163 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:26:33.163 | 30.00th=[ 39], 40.00th=[ 103], 50.00th=[ 113], 60.00th=[ 153], 00:26:33.163 | 70.00th=[ 192], 80.00th=[ 224], 90.00th=[ 249], 95.00th=[ 264], 00:26:33.163 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 338], 99.95th=[ 338], 00:26:33.163 | 99.99th=[ 355] 00:26:33.163 bw ( KiB/s): min=61440, max=461312, per=10.54%, avg=124843.85, stdev=98070.78, samples=20 00:26:33.163 iops : min= 240, max= 1802, avg=487.65, stdev=383.10, samples=20 00:26:33.163 lat (msec) : 10=0.10%, 20=0.36%, 50=31.17%, 100=7.41%, 250=51.69% 00:26:33.163 lat (msec) : 500=9.27% 00:26:33.163 cpu : usr=1.00%, sys=1.21%, ctx=1778, majf=0, minf=1 00:26:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.163 issued rwts: total=0,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.163 job1: (groupid=0, jobs=1): err= 0: pid=391569: Mon Dec 16 22:32:22 2024 00:26:33.163 write: IOPS=493, BW=123MiB/s (129MB/s)(1242MiB/10062msec); 0 zone resets 00:26:33.163 slat (usec): min=20, max=36151, avg=1960.77, stdev=4017.68 00:26:33.163 clat (msec): min=22, max=298, avg=127.59, stdev=62.95 00:26:33.163 lat (msec): min=22, max=304, avg=129.55, stdev=63.81 00:26:33.163 clat percentiles (msec): 00:26:33.163 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 75], 00:26:33.163 | 30.00th=[ 77], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 
122], 00:26:33.163 | 70.00th=[ 142], 80.00th=[ 171], 90.00th=[ 245], 95.00th=[ 271], 00:26:33.163 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 296], 99.95th=[ 300], 00:26:33.163 | 99.99th=[ 300] 00:26:33.163 bw ( KiB/s): min=57856, max=223744, per=10.60%, avg=125574.95, stdev=51683.64, samples=20 00:26:33.163 iops : min= 226, max= 874, avg=490.50, stdev=201.92, samples=20 00:26:33.163 lat (msec) : 50=0.30%, 100=41.59%, 250=49.03%, 500=9.08% 00:26:33.163 cpu : usr=1.30%, sys=1.56%, ctx=1334, majf=0, minf=1 00:26:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.163 issued rwts: total=0,4968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.163 job2: (groupid=0, jobs=1): err= 0: pid=391570: Mon Dec 16 22:32:22 2024 00:26:33.163 write: IOPS=796, BW=199MiB/s (209MB/s)(2004MiB/10063msec); 0 zone resets 00:26:33.163 slat (usec): min=21, max=79233, avg=1110.44, stdev=2574.19 00:26:33.163 clat (msec): min=4, max=273, avg=79.23, stdev=45.08 00:26:33.163 lat (msec): min=5, max=273, avg=80.34, stdev=45.64 00:26:33.163 clat percentiles (msec): 00:26:33.163 | 1.00th=[ 19], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:26:33.163 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 69], 60.00th=[ 75], 00:26:33.163 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 148], 95.00th=[ 182], 00:26:33.163 | 99.00th=[ 232], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 262], 00:26:33.163 | 99.99th=[ 275] 00:26:33.163 bw ( KiB/s): min=71823, max=404992, per=17.18%, avg=203507.25, stdev=92209.32, samples=20 00:26:33.163 iops : min= 280, max= 1582, avg=794.90, stdev=360.24, samples=20 00:26:33.163 lat (msec) : 10=0.12%, 20=1.09%, 50=26.82%, 100=46.26%, 250=25.24% 00:26:33.163 lat (msec) : 500=0.47% 00:26:33.163 cpu : usr=1.53%, sys=2.26%, ctx=2562, majf=0, minf=2 00:26:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.163 issued rwts: total=0,8014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.163 job3: (groupid=0, jobs=1): err= 0: pid=391571: Mon Dec 16 22:32:22 2024 00:26:33.163 write: IOPS=393, BW=98.4MiB/s (103MB/s)(993MiB/10088msec); 0 zone resets 00:26:33.163 slat (usec): min=27, max=75782, avg=2002.50, stdev=5210.13 00:26:33.163 clat (usec): min=867, max=394008, avg=160511.85, stdev=88748.56 00:26:33.163 lat (usec): min=927, max=394056, avg=162514.35, stdev=89922.22 00:26:33.163 clat percentiles (msec): 00:26:33.163 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 25], 20.00th=[ 92], 00:26:33.163 | 30.00th=[ 114], 40.00th=[ 125], 50.00th=[ 174], 60.00th=[ 184], 00:26:33.163 | 70.00th=[ 199], 80.00th=[ 226], 90.00th=[ 292], 95.00th=[ 317], 00:26:33.163 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 393], 00:26:33.163 | 99.99th=[ 393] 00:26:33.163 bw ( KiB/s): min=48640, max=171520, per=8.44%, avg=100027.95, stdev=33375.43, samples=20 00:26:33.163 iops : min= 190, max= 670, avg=390.70, stdev=130.39, samples=20 00:26:33.163 lat (usec) : 1000=0.13% 00:26:33.163 lat (msec) : 2=0.55%, 4=1.84%, 10=2.22%, 20=4.26%, 50=4.86% 00:26:33.163 lat (msec) : 100=7.56%, 250=62.07%, 500=16.52% 
00:26:33.163 cpu : usr=0.87%, sys=1.23%, ctx=1980, majf=0, minf=1 00:26:33.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:33.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.163 issued rwts: total=0,3970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.163 job4: (groupid=0, jobs=1): err= 0: pid=391572: Mon Dec 16 22:32:22 2024 00:26:33.163 write: IOPS=457, BW=114MiB/s (120MB/s)(1157MiB/10116msec); 0 zone resets 00:26:33.163 slat (usec): min=24, max=178343, avg=1966.21, stdev=6771.13 00:26:33.163 clat (usec): min=968, max=506074, avg=137832.85, stdev=91497.51 00:26:33.164 lat (usec): min=1025, max=522354, avg=139799.06, stdev=92650.01 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 48], 00:26:33.164 | 30.00th=[ 81], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 150], 00:26:33.164 | 70.00th=[ 184], 80.00th=[ 218], 90.00th=[ 271], 95.00th=[ 296], 00:26:33.164 | 99.00th=[ 388], 99.50th=[ 443], 99.90th=[ 489], 99.95th=[ 506], 00:26:33.164 | 99.99th=[ 506] 00:26:33.164 bw ( KiB/s): min=34304, max=286781, per=9.87%, avg=116867.05, stdev=65311.28, samples=20 00:26:33.164 iops : min= 134, max= 1120, avg=456.50, stdev=255.09, samples=20 00:26:33.164 lat (usec) : 1000=0.04% 00:26:33.164 lat (msec) : 2=0.41%, 4=1.02%, 10=2.07%, 20=4.49%, 50=13.98% 00:26:33.164 lat (msec) : 100=11.71%, 250=52.77%, 500=13.42%, 750=0.09% 00:26:33.164 cpu : usr=1.22%, sys=1.34%, ctx=1772, majf=0, minf=2 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,4628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job5: (groupid=0, jobs=1): err= 0: pid=391575: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=335, BW=84.0MiB/s (88.1MB/s)(849MiB/10104msec); 0 zone resets 00:26:33.164 slat (usec): min=22, max=92399, avg=2232.99, stdev=5736.86 00:26:33.164 clat (usec): min=1414, max=378988, avg=187931.27, stdev=87568.05 00:26:33.164 lat (usec): min=1459, max=379034, avg=190164.26, stdev=88772.61 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 14], 5.00th=[ 52], 10.00th=[ 75], 20.00th=[ 107], 00:26:33.164 | 30.00th=[ 117], 40.00th=[ 150], 50.00th=[ 184], 60.00th=[ 230], 00:26:33.164 | 70.00th=[ 257], 80.00th=[ 275], 90.00th=[ 296], 95.00th=[ 321], 00:26:33.164 | 99.00th=[ 368], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:26:33.164 | 99.99th=[ 380] 00:26:33.164 bw ( KiB/s): min=48542, max=177152, per=7.20%, avg=85251.85, stdev=32586.78, samples=20 00:26:33.164 iops : min= 189, max= 692, avg=332.95, stdev=127.33, samples=20 00:26:33.164 lat (msec) : 2=0.06%, 4=0.09%, 10=0.29%, 20=0.94%, 50=3.09% 00:26:33.164 lat (msec) : 100=11.79%, 250=51.74%, 500=32.00% 00:26:33.164 cpu : usr=0.78%, sys=1.11%, ctx=1706, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,3394,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job6: (groupid=0, jobs=1): err= 0: pid=391576: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=447, BW=112MiB/s (117MB/s)(1131MiB/10115msec); 0 zone resets 00:26:33.164 slat (usec): min=27, max=27545, avg=1923.04, stdev=4392.83 00:26:33.164 clat (msec): min=2, max=316, avg=141.13, stdev=75.82 00:26:33.164 lat (msec): min=2, max=316, avg=143.06, stdev=76.86 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 67], 00:26:33.164 | 30.00th=[ 92], 40.00th=[ 115], 50.00th=[ 123], 60.00th=[ 157], 00:26:33.164 | 70.00th=[ 182], 80.00th=[ 218], 90.00th=[ 262], 95.00th=[ 271], 00:26:33.164 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 317], 00:26:33.164 | 99.99th=[ 317] 00:26:33.164 bw ( KiB/s): min=55296, max=321536, per=9.64%, avg=114143.55, stdev=65470.12, samples=20 00:26:33.164 iops : min= 216, max= 1256, avg=445.80, stdev=255.73, samples=20 00:26:33.164 lat (msec) : 4=0.04%, 10=0.07%, 20=0.13%, 50=15.79%, 100=16.19% 00:26:33.164 lat (msec) : 250=54.84%, 500=12.94% 00:26:33.164 cpu : usr=1.01%, sys=1.38%, ctx=1682, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,4522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job7: (groupid=0, jobs=1): err= 0: pid=391577: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=328, BW=82.0MiB/s (86.0MB/s)(832MiB/10136msec); 0 zone resets 00:26:33.164 slat (usec): min=27, max=137211, avg=2556.45, stdev=6619.61 00:26:33.164 clat (msec): min=5, max=362, avg=192.33, stdev=74.01 00:26:33.164 lat (msec): min=7, max=362, avg=194.89, stdev=75.12 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 30], 5.00th=[ 52], 10.00th=[ 81], 20.00th=[ 118], 00:26:33.164 | 30.00th=[ 155], 40.00th=[ 184], 50.00th=[ 209], 60.00th=[ 232], 00:26:33.164 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 275], 95.00th=[ 284], 00:26:33.164 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 363], 00:26:33.164 | 99.99th=[ 363] 00:26:33.164 bw ( KiB/s): min=55296, max=151040, per=7.05%, avg=83532.30, stdev=25944.61, samples=20 00:26:33.164 iops : min= 216, max= 590, avg=326.25, stdev=101.34, samples=20 00:26:33.164 lat (msec) : 10=0.09%, 20=0.24%, 50=4.45%, 100=9.74%, 250=57.64% 00:26:33.164 lat (msec) : 500=27.84% 00:26:33.164 cpu : usr=0.81%, sys=1.07%, ctx=1351, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,3326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job8: (groupid=0, jobs=1): err= 0: pid=391579: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=282, BW=70.5MiB/s (73.9MB/s)(713MiB/10107msec); 0 zone resets 00:26:33.164 slat (usec): min=29, max=49864, avg=2754.42, stdev=6090.77 00:26:33.164 clat (msec): min=10, max=331, avg=224.05, stdev=64.71 00:26:33.164 lat (msec): min=10, max=331, avg=226.80, stdev=65.77 00:26:33.164 clat percentiles (msec): 
00:26:33.164 | 1.00th=[ 28], 5.00th=[ 84], 10.00th=[ 136], 20.00th=[ 182], 00:26:33.164 | 30.00th=[ 207], 40.00th=[ 228], 50.00th=[ 241], 60.00th=[ 251], 00:26:33.164 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 300], 00:26:33.164 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:26:33.164 | 99.99th=[ 334] 00:26:33.164 bw ( KiB/s): min=57344, max=109568, per=6.03%, avg=71375.25, stdev=13194.10, samples=20 00:26:33.164 iops : min= 224, max= 428, avg=278.75, stdev=51.52, samples=20 00:26:33.164 lat (msec) : 20=0.35%, 50=3.37%, 100=3.02%, 250=51.77%, 500=41.49% 00:26:33.164 cpu : usr=0.67%, sys=0.97%, ctx=1280, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,2851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job9: (groupid=0, jobs=1): err= 0: pid=391580: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=349, BW=87.4MiB/s (91.7MB/s)(884MiB/10105msec); 0 zone resets 00:26:33.164 slat (usec): min=18, max=134465, avg=2506.43, stdev=7183.47 00:26:33.164 clat (usec): min=1189, max=484073, avg=180383.85, stdev=94897.25 00:26:33.164 lat (usec): min=1241, max=484129, avg=182890.27, stdev=96178.59 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 5], 5.00th=[ 41], 10.00th=[ 60], 20.00th=[ 97], 00:26:33.164 | 30.00th=[ 110], 40.00th=[ 132], 50.00th=[ 182], 60.00th=[ 222], 00:26:33.164 | 70.00th=[ 249], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 330], 00:26:33.164 | 99.00th=[ 368], 99.50th=[ 405], 99.90th=[ 464], 99.95th=[ 464], 00:26:33.164 | 99.99th=[ 485] 00:26:33.164 bw ( KiB/s): min=51200, max=214016, per=7.50%, avg=88833.45, stdev=44951.58, samples=20 00:26:33.164 iops : min= 200, max= 836, avg=346.95, stdev=175.60, samples=20 00:26:33.164 lat (msec) : 2=0.06%, 4=0.82%, 10=1.08%, 20=0.91%, 50=3.54% 00:26:33.164 lat (msec) : 100=15.25%, 250=48.56%, 500=29.80% 00:26:33.164 cpu : usr=1.07%, sys=1.22%, ctx=1324, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 job10: (groupid=0, jobs=1): err= 0: pid=391581: Mon Dec 16 22:32:22 2024 00:26:33.164 write: IOPS=272, BW=68.2MiB/s (71.5MB/s)(691MiB/10138msec); 0 zone resets 00:26:33.164 slat (usec): min=24, max=111542, avg=3558.36, stdev=7414.47 00:26:33.164 clat (usec): min=1524, max=446208, avg=230733.82, stdev=71502.54 00:26:33.164 lat (usec): min=1610, max=446252, avg=234292.17, stdev=72333.70 00:26:33.164 clat percentiles (msec): 00:26:33.164 | 1.00th=[ 6], 5.00th=[ 61], 10.00th=[ 157], 20.00th=[ 184], 00:26:33.164 | 30.00th=[ 213], 40.00th=[ 230], 50.00th=[ 241], 60.00th=[ 253], 00:26:33.164 | 70.00th=[ 268], 80.00th=[ 284], 90.00th=[ 305], 95.00th=[ 330], 00:26:33.164 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 426], 99.95th=[ 447], 00:26:33.164 | 99.99th=[ 447] 00:26:33.164 bw ( KiB/s): min=49250, max=110080, per=5.84%, avg=69124.90, stdev=16226.23, samples=20 00:26:33.164 iops : min= 192, max= 430, 
avg=270.00, stdev=63.41, samples=20 00:26:33.164 lat (msec) : 2=0.07%, 4=0.54%, 10=1.27%, 20=0.62%, 50=1.81% 00:26:33.164 lat (msec) : 100=1.95%, 250=52.06%, 500=41.68% 00:26:33.164 cpu : usr=0.71%, sys=0.96%, ctx=813, majf=0, minf=1 00:26:33.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:33.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.164 issued rwts: total=0,2764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.164 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.164 00:26:33.164 Run status group 0 (all jobs): 00:26:33.164 WRITE: bw=1157MiB/s (1213MB/s), 68.2MiB/s-199MiB/s (71.5MB/s-209MB/s), io=11.5GiB (12.3GB), run=10062-10139msec 00:26:33.164 00:26:33.164 Disk stats (read/write): 00:26:33.164 nvme0n1: ios=48/9708, merge=0/0, ticks=1514/1205366, in_queue=1206880, util=99.92% 00:26:33.164 nvme10n1: ios=45/9672, merge=0/0, ticks=2815/1209554, in_queue=1212369, util=100.00% 00:26:33.164 nvme1n1: ios=0/15760, merge=0/0, ticks=0/1214542, in_queue=1214542, util=97.53% 00:26:33.164 nvme2n1: ios=48/7705, merge=0/0, ticks=2119/1211104, in_queue=1213223, util=99.97% 00:26:33.164 nvme3n1: ios=45/9066, merge=0/0, ticks=4743/1148002, in_queue=1152745, util=99.96% 00:26:33.164 nvme4n1: ios=49/6612, merge=0/0, ticks=1012/1210989, in_queue=1212001, util=100.00% 00:26:33.165 nvme5n1: ios=41/8858, merge=0/0, ticks=751/1213683, in_queue=1214434, util=99.97% 00:26:33.165 nvme6n1: ios=40/6485, merge=0/0, ticks=2920/1188324, in_queue=1191244, util=100.00% 00:26:33.165 nvme7n1: ios=0/5509, merge=0/0, ticks=0/1209570, in_queue=1209570, util=98.74% 00:26:33.165 nvme8n1: ios=44/6889, merge=0/0, ticks=4631/1189435, in_queue=1194066, util=100.00% 00:26:33.165 nvme9n1: ios=45/5356, merge=0/0, ticks=3613/1191067, in_queue=1194680, util=100.00% 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:33.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.165 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:33.424 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:33.424 22:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.424 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:33.683 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.683 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:33.942 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.942 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:34.201 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode5 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.201 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.460 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.460 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.460 22:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:34.460 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.460 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:34.719 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.719 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:34.979 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:34.979 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode9 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.979 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:35.238 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:35.238 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.238 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.238 rmmod nvme_tcp 00:26:35.238 rmmod nvme_fabrics 00:26:35.238 rmmod nvme_keyring 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 383988 ']' 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 383988 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 383988 ']' 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 383988 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.497 22:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383988 00:26:35.497 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.497 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.497 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383988' 00:26:35.497 killing process with pid 383988 00:26:35.497 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 383988 00:26:35.497 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 383988 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.756 22:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:38.291 00:26:38.291 real 1m10.941s 00:26:38.291 user 4m16.365s 00:26:38.291 sys 0m18.053s 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:38.291 ************************************ 00:26:38.291 END TEST nvmf_multiconnection 00:26:38.291 ************************************ 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:38.291 ************************************ 00:26:38.291 START TEST nvmf_initiator_timeout 00:26:38.291 ************************************ 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:38.291 * Looking for test storage... 
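The per-subsystem teardown traced above (nvme disconnect, poll until the SPDKn serial leaves lsblk, then delete the subsystem over RPC) condenses to the loop below; a sketch assuming SPDK's rpc.py is on PATH, where the trace instead routes the same RPC through the harness's rpc_cmd helper:

# mirrors multiconnection.sh cleanup: seq 1 $NVMF_SUBSYS with NVMF_SUBSYS=11
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect: wait for the SPDK${i} serial to disappear
    while lsblk -o NAME,SERIAL | grep -q -w "SPDK${i}"; do sleep 1; done
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done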
00:26:38.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.291 --rc genhtml_branch_coverage=1 00:26:38.291 --rc genhtml_function_coverage=1 00:26:38.291 --rc genhtml_legend=1 00:26:38.291 --rc geninfo_all_blocks=1 00:26:38.291 --rc geninfo_unexecuted_blocks=1 00:26:38.291 00:26:38.291 ' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.291 --rc genhtml_branch_coverage=1 00:26:38.291 --rc genhtml_function_coverage=1 00:26:38.291 --rc genhtml_legend=1 00:26:38.291 --rc geninfo_all_blocks=1 00:26:38.291 --rc geninfo_unexecuted_blocks=1 00:26:38.291 00:26:38.291 ' 00:26:38.291 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.291 --rc genhtml_branch_coverage=1 00:26:38.291 --rc genhtml_function_coverage=1 00:26:38.291 --rc genhtml_legend=1 00:26:38.291 --rc geninfo_all_blocks=1 00:26:38.291 --rc geninfo_unexecuted_blocks=1 00:26:38.291 00:26:38.291 ' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:38.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.292 --rc genhtml_branch_coverage=1 00:26:38.292 --rc genhtml_function_coverage=1 00:26:38.292 --rc genhtml_legend=1 00:26:38.292 --rc geninfo_all_blocks=1 00:26:38.292 --rc geninfo_unexecuted_blocks=1 00:26:38.292 00:26:38.292 ' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.292 22:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.292 22:32:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.867 22:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.867 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:44.868 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.868 22:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:44.868 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:44.868 Found net devices under 0000:af:00.0: cvl_0_0 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.868 22:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:44.868 Found net devices under 0000:af:00.1: cvl_0_1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.868 22:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:26:44.868 00:26:44.868 --- 10.0.0.2 ping statistics --- 00:26:44.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.868 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:26:44.868 00:26:44.868 --- 10.0.0.1 ping statistics --- 00:26:44.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.868 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=396903 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
396903 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 396903 ']' 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.868 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 [2024-12-16 22:32:33.677265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:44.869 [2024-12-16 22:32:33.677314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.869 [2024-12-16 22:32:33.755651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.869 [2024-12-16 22:32:33.778470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.869 [2024-12-16 22:32:33.778506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.869 [2024-12-16 22:32:33.778513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.869 [2024-12-16 22:32:33.778519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.869 [2024-12-16 22:32:33.778524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
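The EAL and reactor notices above mark the end of nvmftestinit: the target-side port has been moved into its own network namespace and nvmf_tgt started inside it, pinned to cores 0-3 (-m 0xF). A minimal sketch of that wiring, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing used on this runner (the socket poll is a simplified stand-in for the real waitforlisten helper):

    # give the target port a private namespace; address both ends of the link
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF
    # comment so teardown can strip it again via iptables-save/restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # launch the target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

The two pings recorded above (10.0.0.2 from the host side, 10.0.0.1 from inside the namespace) verify the link in both directions before the target is started.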
00:26:44.869 [2024-12-16 22:32:33.779974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.869 [2024-12-16 22:32:33.780082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.869 [2024-12-16 22:32:33.780178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.869 [2024-12-16 22:32:33.780178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 Malloc0 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 Delay0 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 [2024-12-16 22:32:33.954663] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.869 [2024-12-16 22:32:33.987841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.869 22:32:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:45.806 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:45.806 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:45.806 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.806 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:45.806 22:32:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=397383 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:47.709 22:32:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:47.709 [global] 00:26:47.709 thread=1 00:26:47.709 invalidate=1 00:26:47.709 rw=write 00:26:47.709 time_based=1 00:26:47.709 runtime=60 00:26:47.709 ioengine=libaio 00:26:47.709 direct=1 00:26:47.709 bs=4096 00:26:47.709 iodepth=1 00:26:47.709 norandommap=0 00:26:47.709 numjobs=1 00:26:47.709 00:26:47.709 verify_dump=1 00:26:47.709 verify_backlog=512 00:26:47.709 verify_state_save=0 00:26:47.709 do_verify=1 00:26:47.709 verify=crc32c-intel 00:26:47.709 [job0] 00:26:47.709 filename=/dev/nvme0n1 00:26:47.709 Could not set queue depth (nvme0n1) 00:26:47.968 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:47.968 fio-3.35 00:26:47.968 Starting 1 thread 00:26:50.503 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.762 true 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.762 true 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.762 true 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.762 true 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.762 22:32:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.052 22:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.052 true 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.052 true 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:54.052 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.053 true 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.053 true 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:54.053 22:32:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 397383 00:27:50.749 00:27:50.749 job0: (groupid=0, jobs=1): err= 0: pid=397529: Mon Dec 16 22:33:37 2024 00:27:50.749 read: IOPS=45, BW=183KiB/s (188kB/s)(10.7MiB/60019msec) 00:27:50.749 slat (usec): min=3, max=11629, avg=16.47, stdev=246.45 00:27:50.749 clat (usec): min=205, max=41341k, avg=21595.26, stdev=788650.10 00:27:50.749 lat (usec): min=213, max=41341k, avg=21611.73, stdev=788650.47 00:27:50.749 clat percentiles (usec): 00:27:50.749 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 235], 00:27:50.749 | 20.00th=[ 239], 30.00th=[ 243], 40.00th=[ 247], 00:27:50.749 | 50.00th=[ 251], 60.00th=[ 255], 70.00th=[ 262], 00:27:50.749 | 80.00th=[ 273], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:50.749 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:27:50.749 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:50.749 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60019msec); 0 zone resets 00:27:50.749 slat (nsec): min=9970, max=61816, avg=11364.31, stdev=2107.79 00:27:50.749 clat (usec): min=149, max=391, avg=187.31, stdev=16.14 00:27:50.749 lat (usec): min=160, max=436, avg=198.68, stdev=16.50 00:27:50.749 clat percentiles (usec): 00:27:50.749 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:27:50.749 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 190], 00:27:50.749 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:27:50.749 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 265], 
99.95th=[ 388], 00:27:50.749 | 99.99th=[ 392] 00:27:50.749 bw ( KiB/s): min= 2000, max= 9352, per=100.00%, avg=6144.00, stdev=3677.07, samples=4 00:27:50.749 iops : min= 500, max= 2338, avg=1536.00, stdev=919.27, samples=4 00:27:50.749 lat (usec) : 250=74.86%, 500=17.80% 00:27:50.750 lat (msec) : 10=0.02%, 50=7.30%, >=2000=0.02% 00:27:50.750 cpu : usr=0.13%, sys=0.13%, ctx=5823, majf=0, minf=1 00:27:50.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.750 issued rwts: total=2748,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:50.750 00:27:50.750 Run status group 0 (all jobs): 00:27:50.750 READ: bw=183KiB/s (188kB/s), 183KiB/s-183KiB/s (188kB/s-188kB/s), io=10.7MiB (11.3MB), run=60019-60019msec 00:27:50.750 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60019-60019msec 00:27:50.750 00:27:50.750 Disk stats (read/write): 00:27:50.750 nvme0n1: ios=2844/3072, merge=0/0, ticks=17926/537, in_queue=18463, util=99.79% 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:50.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:50.750 nvmf hotplug test: fio successful as expected 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.750 rmmod nvme_tcp 00:27:50.750 rmmod nvme_fabrics 00:27:50.750 rmmod nvme_keyring 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 396903 ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 396903 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 396903 ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 396903 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396903 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396903' 00:27:50.750 killing process with pid 396903 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 396903 00:27:50.750 22:33:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 396903 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.750 22:33:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.750 00:27:50.750 real 1m12.596s 00:27:50.750 user 4m22.706s 00:27:50.750 sys 0m6.242s 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.750 ************************************ 00:27:50.750 END TEST nvmf_initiator_timeout 00:27:50.750 ************************************ 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.750 22:33:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
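The END TEST banner just above closes nvmf_initiator_timeout at 1m12.596s of wall time, after the teardown traced before it (nvme disconnect, rmmod of nvme-tcp/nvme-fabrics/nvme-keyring, killing target pid 396903, stripping the SPDK_NVMF iptables rule with iptables-save | grep -v SPDK_NVMF | iptables-restore, and removing the namespace). Condensed from the rpc_cmd trace, the test body is roughly the following sketch (rpc.py path shortened; bdev_delay latencies are given in microseconds, so 31000000 is a 31 s stall, just past the initiator's default 30 s I/O timeout):

    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # plus the --hostnqn/--hostid pair
    # with fio writing in the background, stall every latency class past the
    # timeout (the traced run passes 310000000 for p99_write), then release it
    for lat in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done

The 41 s clat maximum in the fio summary above is that stall surfacing in the I/O path; fio still verifies and exits cleanly, which is what the 'nvmf hotplug test: fio successful as expected' line asserts.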
00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:57.321 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:57.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:57.321 Found net devices under 0000:af:00.0: cvl_0_0 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.321 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:57.322 Found net devices under 0000:af:00.1: cvl_0_1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:57.322 ************************************ 00:27:57.322 START TEST nvmf_perf_adq 00:27:57.322 ************************************ 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:57.322 * Looking for test storage... 
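The scan that just completed is gather_supported_nvmf_pci_devs again, this time run by nvmf_target_extra.sh itself to decide whether the ADQ-dependent perf_adq test can run at all: NICs are bucketed by PCI vendor:device ID, and the netdev bound to each matched function is read back from sysfs (both cvl_0_* ports land in TCP_INTERFACE_LIST). A stripped-down sketch of that discovery, walking sysfs directly instead of the script's cached lspci table (pci_bus_cache), with only the IDs visible in this log spelled out:

    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx net_devs
    for dev in /sys/bus/pci/devices/*; do
        id="$(cat "$dev/vendor"):$(cat "$dev/device")"
        case "$id" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("${dev##*/}") ;;  # E810 (ice)
            "$intel:0x37d2")                   x722+=("${dev##*/}") ;;  # X722 (i40e)
            "$mellanox:"*)                     mlx+=("${dev##*/}")  ;;  # real script lists explicit mlx5 IDs
        esac
    done
    for pci in "${e810[@]}"; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue
            net_devs+=("${net##*/}")   # the script also requires the link to be up
        done
    done

The two 'Found net devices under 0000:af:00.x' echoes in the trace are the tail end of exactly this loop.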
00:27:57.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.322 --rc genhtml_branch_coverage=1 00:27:57.322 --rc genhtml_function_coverage=1 00:27:57.322 --rc genhtml_legend=1 00:27:57.322 --rc geninfo_all_blocks=1 00:27:57.322 --rc geninfo_unexecuted_blocks=1 00:27:57.322 00:27:57.322 ' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.322 --rc genhtml_branch_coverage=1 00:27:57.322 --rc genhtml_function_coverage=1 00:27:57.322 --rc genhtml_legend=1 00:27:57.322 --rc geninfo_all_blocks=1 00:27:57.322 --rc geninfo_unexecuted_blocks=1 00:27:57.322 00:27:57.322 ' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.322 --rc genhtml_branch_coverage=1 00:27:57.322 --rc genhtml_function_coverage=1 00:27:57.322 --rc genhtml_legend=1 00:27:57.322 --rc geninfo_all_blocks=1 00:27:57.322 --rc geninfo_unexecuted_blocks=1 00:27:57.322 00:27:57.322 ' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:57.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.322 --rc genhtml_branch_coverage=1 00:27:57.322 --rc genhtml_function_coverage=1 00:27:57.322 --rc genhtml_legend=1 00:27:57.322 --rc geninfo_all_blocks=1 00:27:57.322 --rc geninfo_unexecuted_blocks=1 00:27:57.322 00:27:57.322 ' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
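The cmp_versions trace above is the coverage-tooling gate: the installed lcov version (the last field of "lcov --version") is compared against 2 component by component, with both strings split on ".", "-" and ":" and missing components treated as zero, so 1.15 sorts below 2 and the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" option set gets exported. The same comparison as a compact function (a sketch in the spirit of cmp_versions, assuming numeric components; not a verbatim copy of scripts/common.sh):

  # usage: version_lt 1.15 2   -> exit status 0 (true)
  version_lt() {
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: less-than holds
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger: it does not
    done
    return 1                                        # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov options"

With the coverage flags settled, perf_adq.sh sources nvmf/common.sh, whose first act is to branch on "uname -s" to rule out FreeBSD.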
00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.322 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:57.323 22:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.323 22:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.595 22:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:02.595 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:02.595 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:02.595 Found net devices under 0000:af:00.0: cvl_0_0 00:28:02.595 22:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:02.595 Found net devices under 0000:af:00.1: cvl_0_1 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.595 22:33:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:03.532 22:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:06.817 22:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.091 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.091 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.091 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.811 ms 00:28:12.092 00:28:12.092 --- 10.0.0.2 ping statistics --- 00:28:12.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.092 rtt min/avg/max/mdev = 0.811/0.811/0.811/0.000 ms 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:28:12.092 00:28:12.092 --- 10.0.0.1 ping statistics --- 00:28:12.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.092 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=415836 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 415836 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 415836 ']' 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.092 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.350 [2024-12-16 22:34:01.815818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
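The nvmf_tcp_init sequence above carves the two E810 ports into a point-to-point test bed: cvl_0_0 is moved into the fresh cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, an iptables rule opens the NVMe/TCP listen port, and one ping in each direction proves the path before the target is launched inside the namespace. Collapsed out of the xtrace, the setup is essentially (run as root; interface and address names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target: 0.811 ms
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator: 0.149 ms

Because nvmf_tgt then runs under "ip netns exec cvl_0_0_ns_spdk", initiator and target traffic traverses the NIC (NET_TYPE=phy) rather than kernel loopback. The target that starts next is configured for ADQ over RPC: posix sock options with --enable-placement-id 0 and zero-copy server sends, a TCP transport created with --io-unit-size 8192 --sock-priority 0, and a Malloc-backed subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420.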
00:28:12.350 [2024-12-16 22:34:01.815870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.350 [2024-12-16 22:34:01.895541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.350 [2024-12-16 22:34:01.918602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.350 [2024-12-16 22:34:01.918640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.350 [2024-12-16 22:34:01.918647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.350 [2024-12-16 22:34:01.918653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.350 [2024-12-16 22:34:01.918658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.350 [2024-12-16 22:34:01.920063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.350 [2024-12-16 22:34:01.920170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.350 [2024-12-16 22:34:01.920278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.350 [2024-12-16 22:34:01.920278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:12.350 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:12.351 22:34:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.351 
22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.351 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.609 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.610 [2024-12-16 22:34:02.128356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.610 Malloc1 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.610 [2024-12-16 22:34:02.195669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=416003 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:12.610 22:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:14.514 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:14.514 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.514 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:14.773 "tick_rate": 2100000000, 00:28:14.773 "poll_groups": [ 00:28:14.773 { 00:28:14.773 "name": "nvmf_tgt_poll_group_000", 00:28:14.773 "admin_qpairs": 1, 00:28:14.773 "io_qpairs": 1, 00:28:14.773 "current_admin_qpairs": 1, 00:28:14.773 "current_io_qpairs": 1, 00:28:14.773 "pending_bdev_io": 0, 00:28:14.773 "completed_nvme_io": 19181, 00:28:14.773 "transports": [ 00:28:14.773 { 00:28:14.773 "trtype": "TCP" 00:28:14.773 } 00:28:14.773 ] 00:28:14.773 }, 00:28:14.773 { 00:28:14.773 "name": "nvmf_tgt_poll_group_001", 00:28:14.773 "admin_qpairs": 0, 00:28:14.773 "io_qpairs": 1, 00:28:14.773 "current_admin_qpairs": 0, 00:28:14.773 "current_io_qpairs": 1, 00:28:14.773 "pending_bdev_io": 0, 00:28:14.773 "completed_nvme_io": 19283, 00:28:14.773 "transports": [ 00:28:14.773 { 00:28:14.773 "trtype": "TCP" 00:28:14.773 } 00:28:14.773 ] 00:28:14.773 }, 00:28:14.773 { 00:28:14.773 "name": "nvmf_tgt_poll_group_002", 00:28:14.773 "admin_qpairs": 0, 00:28:14.773 "io_qpairs": 1, 00:28:14.773 "current_admin_qpairs": 0, 00:28:14.773 "current_io_qpairs": 1, 00:28:14.773 "pending_bdev_io": 0, 00:28:14.773 "completed_nvme_io": 19524, 00:28:14.773 "transports": [ 00:28:14.773 { 00:28:14.773 "trtype": "TCP" 00:28:14.773 } 00:28:14.773 ] 00:28:14.773 }, 00:28:14.773 { 00:28:14.773 "name": "nvmf_tgt_poll_group_003", 00:28:14.773 "admin_qpairs": 0, 00:28:14.773 "io_qpairs": 1, 00:28:14.773 "current_admin_qpairs": 0, 00:28:14.773 "current_io_qpairs": 1, 00:28:14.773 "pending_bdev_io": 0, 00:28:14.773 "completed_nvme_io": 19079, 00:28:14.773 "transports": [ 00:28:14.773 { 00:28:14.773 "trtype": "TCP" 00:28:14.773 } 00:28:14.773 ] 00:28:14.773 } 00:28:14.773 ] 00:28:14.773 }' 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:14.773 22:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 416003 00:28:22.890 Initializing NVMe Controllers 00:28:22.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:22.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:22.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:22.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7
00:28:22.890 Initialization complete. Launching workers.
00:28:22.890 ========================================================
00:28:22.890 Latency(us)
00:28:22.890 Device Information : IOPS MiB/s Average min max
00:28:22.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10105.89 39.48 6331.86 1992.96 10451.81
00:28:22.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10253.99 40.05 6241.18 1556.33 11237.00
00:28:22.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10424.39 40.72 6138.29 2259.93 10435.53
00:28:22.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10166.39 39.71 6295.70 2467.32 10827.67
00:28:22.890 ========================================================
00:28:22.890 Total : 40950.68 159.96 6250.90 1556.33 11237.00
00:28:22.890
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 415836 ']'
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 415836
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 415836 ']'
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 415836
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415836
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415836'
killing process with pid 415836
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 415836
00:28:22.890 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 415836
00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.150 22:34:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.054 22:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.054 22:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:25.054 22:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:25.054 22:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:26.429 22:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:28.966 22:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.240 22:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.240 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.240 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.240 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.240 22:34:23 
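Each surviving PCI function is then mapped to its kernel interface by globbing the device's sysfs net directory; the [[ up == up ]] test is the script comparing the interface's link state (presumably read from operstate, the read itself is collapsed out of the trace) against 'up' before accepting it. A minimal sketch of the same lookup for the first port, with the device path taken from the log:

  pci=0000:af:00.0
  ls "/sys/bus/pci/devices/$pci/net/"     # -> cvl_0_0, the name echoed below
  cat /sys/class/net/cvl_0_0/operstate    # -> up, which is what 'up == up' encodes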
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.240 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.240 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:28:34.241 00:28:34.241 --- 10.0.0.2 ping statistics --- 00:28:34.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.241 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:34.241 00:28:34.241 --- 10.0.0.1 ping statistics --- 00:28:34.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.241 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:34.241 net.core.busy_poll = 1 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:34.241 net.core.busy_read = 1 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:34.241 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:34.500 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:34.500 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:34.500 22:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419813 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419813 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419813 ']' 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.500 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.500 [2024-12-16 22:34:24.083537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:34.501 [2024-12-16 22:34:24.083581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.501 [2024-12-16 22:34:24.162440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.501 [2024-12-16 22:34:24.184581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
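By this point the two E810 ports have been looped back to each other: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), an iptables rule opened TCP/4420, and the two one-packet pings proved both directions work. adq_configure_driver then enabled hardware TC offload and busy polling and steered the NVMe/TCP flow into its own traffic class. Condensed from the trace, same commands minus the ip netns exec prefix:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: queues 0-1 form TC0, queues 2-3 form TC1, offloaded in channel mode
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only (skip_sw)
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper that follows the tc setup pins transmit queues to their matching receive queues, and nvmf_tgt then starts inside the namespace on core mask 0xF with --wait-for-rpc, producing the DPDK EAL and tracepoint notices that follow.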
00:28:34.501 [2024-12-16 22:34:24.184620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.501 [2024-12-16 22:34:24.184627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.501 [2024-12-16 22:34:24.184633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.501 [2024-12-16 22:34:24.184638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.501 [2024-12-16 22:34:24.185907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.501 [2024-12-16 22:34:24.186018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.501 [2024-12-16 22:34:24.186126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.501 [2024-12-16 22:34:24.186126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 
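With the four reactors up (cores 0 through 3, matching -m 0xF), adq_configure_nvmf_target tuned the posix socket implementation before anything listens: --enable-placement-id 1 makes incoming connections land on the poll group that owns the hardware queue they arrived on, which is the mechanism that will make ADQ steering visible in the per-poll-group stats further down, and zero-copy sends are enabled on the server side. rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py, so the equivalent direct calls would be (a sketch, paths assumed):

  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 \
      --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init    # releases the --wait-for-rpc pause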
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 [2024-12-16 22:34:24.402981] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 Malloc1 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.760 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.018 [2024-12-16 22:34:24.468910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419865 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:35.018 22:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.914 22:34:26 
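The target was then provisioned end to end: a TCP transport with 8 KiB I/O units and socket priority 1 (the priority ties the data sockets to the ADQ traffic class), a 64 MiB malloc bdev with 512 byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. As direct rpc.py calls the sequence reads (sketch, same arguments as the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf was launched as the load generator on core mask 0xF0 (cores 4 through 7), so initiator and target never share a core: queue depth 64, 4 KiB random reads, 10 seconds.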
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:36.914 "tick_rate": 2100000000, 00:28:36.914 "poll_groups": [ 00:28:36.914 { 00:28:36.914 "name": "nvmf_tgt_poll_group_000", 00:28:36.914 "admin_qpairs": 1, 00:28:36.914 "io_qpairs": 2, 00:28:36.914 "current_admin_qpairs": 1, 00:28:36.914 "current_io_qpairs": 2, 00:28:36.914 "pending_bdev_io": 0, 00:28:36.914 "completed_nvme_io": 28439, 00:28:36.914 "transports": [ 00:28:36.914 { 00:28:36.914 "trtype": "TCP" 00:28:36.914 } 00:28:36.914 ] 00:28:36.914 }, 00:28:36.914 { 00:28:36.914 "name": "nvmf_tgt_poll_group_001", 00:28:36.914 "admin_qpairs": 0, 00:28:36.914 "io_qpairs": 2, 00:28:36.914 "current_admin_qpairs": 0, 00:28:36.914 "current_io_qpairs": 2, 00:28:36.914 "pending_bdev_io": 0, 00:28:36.914 "completed_nvme_io": 28406, 00:28:36.914 "transports": [ 00:28:36.914 { 00:28:36.914 "trtype": "TCP" 00:28:36.914 } 00:28:36.914 ] 00:28:36.914 }, 00:28:36.914 { 00:28:36.914 "name": "nvmf_tgt_poll_group_002", 00:28:36.914 "admin_qpairs": 0, 00:28:36.914 "io_qpairs": 0, 00:28:36.914 "current_admin_qpairs": 0, 00:28:36.914 "current_io_qpairs": 0, 00:28:36.914 "pending_bdev_io": 0, 00:28:36.914 "completed_nvme_io": 0, 00:28:36.914 "transports": [ 00:28:36.914 { 00:28:36.914 "trtype": "TCP" 00:28:36.914 } 00:28:36.914 ] 00:28:36.914 }, 00:28:36.914 { 00:28:36.914 "name": "nvmf_tgt_poll_group_003", 00:28:36.914 "admin_qpairs": 0, 00:28:36.914 "io_qpairs": 0, 00:28:36.914 "current_admin_qpairs": 0, 00:28:36.914 "current_io_qpairs": 0, 00:28:36.914 "pending_bdev_io": 0, 00:28:36.914 "completed_nvme_io": 0, 00:28:36.914 "transports": [ 00:28:36.914 { 00:28:36.914 "trtype": "TCP" 00:28:36.914 } 00:28:36.914 ] 00:28:36.914 } 00:28:36.914 ] 00:28:36.914 }' 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:36.914 22:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419865 00:28:45.016 Initializing NVMe Controllers 00:28:45.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:45.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:45.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:45.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:45.016 Initialization complete. Launching workers. 
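The nvmf_get_stats dump above is the actual ADQ assertion. All four I/O qpairs landed on poll groups 000 and 001 (two each, with completed I/O on both), while groups 002 and 003 stayed completely idle; that is exactly what connection placement by hardware queue should produce. The jq/wc pipeline counts the idle groups, one output line per matching poll group:

  # the check, expanded; the jq filter is copied from the trace
  count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$nvmf_stats" | wc -l)
  [[ $count -lt 2 ]]    # count=2 here, so the test does not fail

The run then simply waits for the perf process (pid 419865) to finish its 10 seconds.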
00:28:45.017 ======================================================== 00:28:45.017 Latency(us) 00:28:45.017 Device Information : IOPS MiB/s Average min max 00:28:45.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8154.50 31.85 7848.65 1486.83 52821.36 00:28:45.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7032.90 27.47 9100.79 1471.39 52494.17 00:28:45.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7217.90 28.19 8866.18 1489.86 53929.83 00:28:45.017 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8028.20 31.36 7970.97 1043.10 52908.10 00:28:45.017 ======================================================== 00:28:45.017 Total : 30433.50 118.88 8411.60 1043.10 53929.83 00:28:45.017 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.017 rmmod nvme_tcp 00:28:45.017 rmmod nvme_fabrics 00:28:45.017 rmmod nvme_keyring 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419813 ']' 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419813 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419813 ']' 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419813 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.017 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419813 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419813' 00:28:45.276 killing process with pid 419813 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419813 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419813 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.276 22:34:34 
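The summary table is internally consistent: 30433.50 IOPS of 4 KiB reads is 30433.5 x 4096 / 2^20, roughly 118.9 MiB/s, the Total MiB/s shown; and by Little's law the 4 workers x queue depth 64 = 256 outstanding commands, divided by the 8411.60 us mean latency, give 256 / 0.0084116 s, about 30.4 k IOPS, matching the measured total. nvmftestfini then unloads nvme_tcp, nvme_fabrics and nvme_keyring and kills the target.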
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.276 22:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:48.564 00:28:48.564 real 0m52.238s 00:28:48.564 user 2m43.989s 00:28:48.564 sys 0m11.161s 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.564 ************************************ 00:28:48.564 END TEST nvmf_perf_adq 00:28:48.564 ************************************ 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:48.564 ************************************ 00:28:48.564 START TEST nvmf_shutdown 00:28:48.564 ************************************ 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:48.564 * Looking for test storage... 
00:28:48.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:48.564 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.824 --rc genhtml_branch_coverage=1 00:28:48.824 --rc genhtml_function_coverage=1 00:28:48.824 --rc genhtml_legend=1 00:28:48.824 --rc geninfo_all_blocks=1 00:28:48.824 --rc geninfo_unexecuted_blocks=1 00:28:48.824 00:28:48.824 ' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.824 --rc genhtml_branch_coverage=1 00:28:48.824 --rc genhtml_function_coverage=1 00:28:48.824 --rc genhtml_legend=1 00:28:48.824 --rc geninfo_all_blocks=1 00:28:48.824 --rc geninfo_unexecuted_blocks=1 00:28:48.824 00:28:48.824 ' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.824 --rc genhtml_branch_coverage=1 00:28:48.824 --rc genhtml_function_coverage=1 00:28:48.824 --rc genhtml_legend=1 00:28:48.824 --rc geninfo_all_blocks=1 00:28:48.824 --rc geninfo_unexecuted_blocks=1 00:28:48.824 00:28:48.824 ' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.824 --rc genhtml_branch_coverage=1 00:28:48.824 --rc genhtml_function_coverage=1 00:28:48.824 --rc genhtml_legend=1 00:28:48.824 --rc geninfo_all_blocks=1 00:28:48.824 --rc geninfo_unexecuted_blocks=1 00:28:48.824 00:28:48.824 ' 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
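The scripts/common.sh chatter above is cmp_versions deciding whether the installed lcov (1.15) is older than 2: both version strings are split on dots and dashes, the fields are compared numerically left to right, and 1 < 2 settles it on the first field, so the branch and function coverage flags are exported into LCOV_OPTS and LCOV. A compressed sketch of the same walk:

  # compressed sketch of scripts/common.sh cmp_versions for 'lt 1.15 2'
  IFS=.- read -ra ver1 <<< "1.15"     # ver1=(1 15), ver1_l=2
  IFS=.- read -ra ver2 <<< "2"        # ver2=(2),    ver2_l=1
  (( ver1[0] < ver2[0] )) && echo lt  # 1 < 2, so the lcov options are enabled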
00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.824 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:48.825 22:34:38 
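nvmf/common.sh then rebuilds the shutdown suite's environment: ports 4420 through 4422, a fresh host NQN from nvme-cli, and the repeatedly re-sourced paths/export.sh, which is why the PATH echoes above keep growing with duplicated entries. The single error line, common.sh line 33 complaining 'integer expression expected', is the harmless result of an unset test flag being compared numerically ('[' '' -eq 1 ']'); the script falls through to the next branch and carries on. The host identity comes from nvme-cli, with the uuid as captured above:

  nvme gen-hostnqn
  # -> nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562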
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:48.825 ************************************ 00:28:48.825 START TEST nvmf_shutdown_tc1 00:28:48.825 ************************************ 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.825 22:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.396 22:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:55.396 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.397 22:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:55.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:55.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:55.397 Found net devices under 0000:af:00.0: cvl_0_0 00:28:55.397 22:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:55.397 Found net devices under 0000:af:00.1: cvl_0_1 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.397 22:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:28:55.397 00:28:55.397 --- 10.0.0.2 ping statistics --- 00:28:55.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.397 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:28:55.397 00:28:55.397 --- 10.0.0.1 ping statistics --- 00:28:55.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.397 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.397 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425184 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425184 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425184 ']' 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
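
At this point nvmf_tcp_init has built the test topology: the target-side port cvl_0_0 sits in its own network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the path before the target is started inside the namespace. A condensed sketch of the command sequence the trace above executed (the namespace, interface names, and addresses are the ones from this run; the real code also flushes stale addresses first and tags the iptables rule with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every target-side command from here on, nvmf_tgt included, is prefixed with "ip netns exec cvl_0_0_ns_spdk" through NVMF_TARGET_NS_CMD.
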
00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 [2024-12-16 22:34:44.313834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:55.398 [2024-12-16 22:34:44.313879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.398 [2024-12-16 22:34:44.391663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.398 [2024-12-16 22:34:44.414417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.398 [2024-12-16 22:34:44.414452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.398 [2024-12-16 22:34:44.414459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.398 [2024-12-16 22:34:44.414465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.398 [2024-12-16 22:34:44.414471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.398 [2024-12-16 22:34:44.415942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.398 [2024-12-16 22:34:44.416053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.398 [2024-12-16 22:34:44.416160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.398 [2024-12-16 22:34:44.416161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 [2024-12-16 22:34:44.544723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:55.398 22:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.398 22:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 Malloc1 
00:28:55.398 [2024-12-16 22:34:44.655385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.398 Malloc2 00:28:55.398 Malloc3 00:28:55.398 Malloc4 00:28:55.398 Malloc5 00:28:55.398 Malloc6 00:28:55.398 Malloc7 00:28:55.398 Malloc8 00:28:55.398 Malloc9 00:28:55.398 Malloc10 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425453 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425453 /var/tmp/bdevperf.sock 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425453 ']' 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:55.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
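
The bdev_svc client launched above receives its bdev configuration through process substitution (--json /dev/fd/63); gen_nvmf_target_json, traced next, emits one bdev_nvme_attach_controller entry per subsystem index. A simplified, runnable sketch of that generator follows (renamed here to avoid clashing with the real helper in nvmf/common.sh, which templates $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, takes hdgst/ddgst defaults via ${hdgst:-false}, and runs the joined result through jq; the literal values below are the ones resolved in this run):

# Build a comma-joined list of bdev_nvme_attach_controller JSON fragments,
# one per subsystem index, mirroring the heredoc loop traced below.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                     # join the fragments with commas
    printf '%s\n' "${config[*]}"
}

It would be invoked the way the script invokes the real helper: bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10).
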
00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.398 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.398 { 00:28:55.398 "params": { 00:28:55.398 "name": "Nvme$subsystem", 00:28:55.398 "trtype": "$TEST_TRANSPORT", 00:28:55.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.398 "adrfam": "ipv4", 00:28:55.398 "trsvcid": "$NVMF_PORT", 00:28:55.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.398 "hdgst": ${hdgst:-false}, 00:28:55.398 "ddgst": ${ddgst:-false} 00:28:55.398 }, 00:28:55.398 "method": "bdev_nvme_attach_controller" 00:28:55.398 } 00:28:55.398 EOF 00:28:55.399 )") 00:28:55.399 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.399 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.399 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.399 { 00:28:55.399 "params": { 00:28:55.399 "name": "Nvme$subsystem", 00:28:55.399 "trtype": "$TEST_TRANSPORT", 00:28:55.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.399 "adrfam": "ipv4", 00:28:55.399 "trsvcid": "$NVMF_PORT", 00:28:55.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.399 "hdgst": ${hdgst:-false}, 00:28:55.399 "ddgst": ${ddgst:-false} 00:28:55.399 }, 00:28:55.399 "method": "bdev_nvme_attach_controller" 00:28:55.399 } 00:28:55.399 EOF 00:28:55.399 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 [2024-12-16 22:34:45.128916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:55.657 [2024-12-16 22:34:45.128963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:55.657 { 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme$subsystem", 00:28:55.657 "trtype": "$TEST_TRANSPORT", 00:28:55.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.657 "adrfam": "ipv4", 
00:28:55.657 "trsvcid": "$NVMF_PORT", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.657 "hdgst": ${hdgst:-false}, 00:28:55.657 "ddgst": ${ddgst:-false} 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 } 00:28:55.657 EOF 00:28:55.657 )") 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:55.657 22:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme1", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.657 "hdgst": false, 00:28:55.657 "ddgst": false 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 },{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme2", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.657 "hdgst": false, 00:28:55.657 "ddgst": false 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 },{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme3", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:55.657 "hdgst": false, 00:28:55.657 "ddgst": false 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 },{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme4", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:55.657 "hdgst": false, 00:28:55.657 "ddgst": false 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 },{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme5", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:55.657 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:55.657 "hdgst": false, 00:28:55.657 "ddgst": false 00:28:55.657 }, 00:28:55.657 "method": "bdev_nvme_attach_controller" 00:28:55.657 },{ 00:28:55.657 "params": { 00:28:55.657 "name": "Nvme6", 00:28:55.657 "trtype": "tcp", 00:28:55.657 "traddr": "10.0.0.2", 00:28:55.657 "adrfam": "ipv4", 00:28:55.657 "trsvcid": "4420", 00:28:55.657 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:55.658 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:55.658 "hdgst": false, 00:28:55.658 "ddgst": false 00:28:55.658 }, 00:28:55.658 "method": "bdev_nvme_attach_controller" 00:28:55.658 },{ 00:28:55.658 "params": { 00:28:55.658 "name": "Nvme7", 00:28:55.658 "trtype": "tcp", 00:28:55.658 "traddr": "10.0.0.2", 00:28:55.658 
"adrfam": "ipv4", 00:28:55.658 "trsvcid": "4420", 00:28:55.658 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:55.658 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:55.658 "hdgst": false, 00:28:55.658 "ddgst": false 00:28:55.658 }, 00:28:55.658 "method": "bdev_nvme_attach_controller" 00:28:55.658 },{ 00:28:55.658 "params": { 00:28:55.658 "name": "Nvme8", 00:28:55.658 "trtype": "tcp", 00:28:55.658 "traddr": "10.0.0.2", 00:28:55.658 "adrfam": "ipv4", 00:28:55.658 "trsvcid": "4420", 00:28:55.658 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:55.658 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:55.658 "hdgst": false, 00:28:55.658 "ddgst": false 00:28:55.658 }, 00:28:55.658 "method": "bdev_nvme_attach_controller" 00:28:55.658 },{ 00:28:55.658 "params": { 00:28:55.658 "name": "Nvme9", 00:28:55.658 "trtype": "tcp", 00:28:55.658 "traddr": "10.0.0.2", 00:28:55.658 "adrfam": "ipv4", 00:28:55.658 "trsvcid": "4420", 00:28:55.658 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:55.658 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:55.658 "hdgst": false, 00:28:55.658 "ddgst": false 00:28:55.658 }, 00:28:55.658 "method": "bdev_nvme_attach_controller" 00:28:55.658 },{ 00:28:55.658 "params": { 00:28:55.658 "name": "Nvme10", 00:28:55.658 "trtype": "tcp", 00:28:55.658 "traddr": "10.0.0.2", 00:28:55.658 "adrfam": "ipv4", 00:28:55.658 "trsvcid": "4420", 00:28:55.658 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:55.658 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:55.658 "hdgst": false, 00:28:55.658 "ddgst": false 00:28:55.658 }, 00:28:55.658 "method": "bdev_nvme_attach_controller" 00:28:55.658 }' 00:28:55.658 [2024-12-16 22:34:45.202268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.658 [2024-12-16 22:34:45.224593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.558 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.558 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:57.558 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:57.558 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.558 22:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:57.558 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.558 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425453 00:28:57.558 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:57.558 22:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:58.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425453 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425184 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.494 { 00:28:58.494 "params": { 00:28:58.494 "name": "Nvme$subsystem", 00:28:58.494 "trtype": "$TEST_TRANSPORT", 00:28:58.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.494 "adrfam": "ipv4", 00:28:58.494 "trsvcid": "$NVMF_PORT", 00:28:58.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.494 "hdgst": ${hdgst:-false}, 00:28:58.494 "ddgst": ${ddgst:-false} 00:28:58.494 }, 00:28:58.494 "method": "bdev_nvme_attach_controller" 00:28:58.494 } 00:28:58.494 EOF 00:28:58.494 )") 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.494 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.494 { 00:28:58.494 "params": { 00:28:58.494 "name": "Nvme$subsystem", 00:28:58.494 "trtype": "$TEST_TRANSPORT", 00:28:58.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.494 "adrfam": "ipv4", 00:28:58.494 "trsvcid": "$NVMF_PORT", 00:28:58.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.494 "hdgst": ${hdgst:-false}, 00:28:58.494 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 [2024-12-16 
22:34:48.055567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:58.495 [2024-12-16 22:34:48.055614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425930 ] 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.495 { 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme$subsystem", 00:28:58.495 "trtype": "$TEST_TRANSPORT", 00:28:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "$NVMF_PORT", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.495 "hdgst": ${hdgst:-false}, 00:28:58.495 "ddgst": ${ddgst:-false} 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 } 00:28:58.495 EOF 00:28:58.495 )") 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
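
The config fragments assembled above (joined by the printf that follows) are for the second client, bdevperf. This is the core assertion of shutdown_tc1: the first client, bdev_svc (perfpid 425453), was killed with SIGKILL mid-session (the "line 74: 425453 Killed" message earlier), and the test now verifies that nvmf_tgt (nvmfpid 425184) survived the abrupt disconnect and still serves I/O. A condensed restatement of the shutdown.sh steps visible in the trace, not new test logic:

    kill -9 "$perfpid"               # @84: hard-kill the bdev_svc client
    rm -f /var/run/spdk_bdev1        # @85
    sleep 1                          # @88
    kill -0 "$nvmfpid"               # @89: fails the test if the target died too
    build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1    # @92: 64-deep, 64 KiB verify workload for 1 s
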
00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:58.495 22:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme1", 00:28:58.495 "trtype": "tcp", 00:28:58.495 "traddr": "10.0.0.2", 00:28:58.495 "adrfam": "ipv4", 00:28:58.495 "trsvcid": "4420", 00:28:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.495 "hdgst": false, 00:28:58.495 "ddgst": false 00:28:58.495 }, 00:28:58.495 "method": "bdev_nvme_attach_controller" 00:28:58.495 },{ 00:28:58.495 "params": { 00:28:58.495 "name": "Nvme2", 00:28:58.495 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme3", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme4", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme5", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme6", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme7", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme8", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme9", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 },{ 00:28:58.496 "params": { 00:28:58.496 "name": "Nvme10", 00:28:58.496 "trtype": "tcp", 00:28:58.496 "traddr": "10.0.0.2", 00:28:58.496 "adrfam": "ipv4", 00:28:58.496 "trsvcid": "4420", 00:28:58.496 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:58.496 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:58.496 "hdgst": false, 00:28:58.496 "ddgst": false 00:28:58.496 }, 00:28:58.496 "method": "bdev_nvme_attach_controller" 00:28:58.496 }' 00:28:58.496 [2024-12-16 22:34:48.132186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.496 [2024-12-16 22:34:48.154694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.400 Running I/O for 1 seconds... 00:29:01.226 1929.00 IOPS, 120.56 MiB/s 00:29:01.226 Latency(us) 00:29:01.226 [2024-12-16T21:34:50.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.226 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme1n1 : 1.14 224.80 14.05 0.00 0.00 281969.13 18974.23 255652.82 00:29:01.226 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme2n1 : 1.10 255.52 15.97 0.00 0.00 237198.30 10173.68 226692.14 00:29:01.226 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme3n1 : 1.10 232.89 14.56 0.00 0.00 263368.90 22968.81 252656.88 00:29:01.226 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme4n1 : 1.10 233.20 14.58 0.00 0.00 260034.80 14542.75 253655.53 00:29:01.226 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme5n1 : 1.17 272.92 17.06 0.00 0.00 217952.26 11921.31 247663.66 00:29:01.226 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme6n1 : 1.13 226.10 14.13 0.00 0.00 260943.97 18100.42 252656.88 00:29:01.226 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme7n1 : 1.17 272.75 17.05 0.00 0.00 213489.08 14417.92 263641.97 00:29:01.226 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme8n1 : 1.18 271.36 16.96 0.00 0.00 211895.34 10922.67 249660.95 00:29:01.226 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme9n1 : 1.17 219.66 13.73 0.00 0.00 257589.39 23343.30 275625.69 00:29:01.226 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:29:01.226 Verification LBA range: start 0x0 length 0x400 00:29:01.226 Nvme10n1 : 1.18 270.56 16.91 0.00 0.00 206521.10 8738.13 251658.24 00:29:01.226 [2024-12-16T21:34:50.927Z] =================================================================================================================== 00:29:01.226 [2024-12-16T21:34:50.927Z] Total : 2479.77 154.99 0.00 0.00 238482.39 8738.13 275625.69 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.486 rmmod nvme_tcp 00:29:01.486 rmmod nvme_fabrics 00:29:01.486 rmmod nvme_keyring 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425184 ']' 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425184 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425184 ']' 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 425184 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425184 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425184' 00:29:01.486 killing process with pid 425184 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425184 00:29:01.486 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425184 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.054 22:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:03.959 00:29:03.959 real 0m15.200s 00:29:03.959 user 0m34.195s 00:29:03.959 sys 0m5.722s 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.959 ************************************ 00:29:03.959 END TEST nvmf_shutdown_tc1 00:29:03.959 ************************************ 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:03.959 ************************************ 00:29:03.959 START TEST nvmf_shutdown_tc2 00:29:03.959 ************************************ 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:03.959 22:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.959 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:03.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:03.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:03.960 Found net devices under 0000:af:00.0: cvl_0_0 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.960 22:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:03.960 Found net devices under 0000:af:00.1: cvl_0_1 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.960 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.219 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.478 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.478 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.478 22:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:29:04.478 00:29:04.478 --- 10.0.0.2 ping statistics --- 00:29:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.478 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:04.478 00:29:04.478 --- 10.0.0.1 ping statistics --- 00:29:04.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.478 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.478 22:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=426939 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 426939 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426939 ']' 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.478 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.479 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.479 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.479 [2024-12-16 22:34:54.112435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:04.479 [2024-12-16 22:34:54.112477] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.738 [2024-12-16 22:34:54.189546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.738 [2024-12-16 22:34:54.212358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.738 [2024-12-16 22:34:54.212394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.738 [2024-12-16 22:34:54.212401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.738 [2024-12-16 22:34:54.212407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.738 [2024-12-16 22:34:54.212413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
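Note on the trace above: this is the entire tc2 test-bed bring-up. The two ice ports found under 0000:af:00.0/1 become cvl_0_0 and cvl_0_1; cvl_0_0 is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, the firewall rule is tagged so teardown can strip it again, and only then is nvmf_tgt launched inside the namespace with core mask 0x1E. A condensed shell replay of that sequence (interface and namespace names are the ones this particular run discovered; they will differ on other hosts):

# addresses on both ports were flushed first (ip -4 addr flush cvl_0_*)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP/4420; the SPDK_NVMF comment is what lets nvmf_tcp_fini remove the
# rule later via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# both directions must answer before the target application is started
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1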
00:29:04.738 [2024-12-16 22:34:54.213775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.738 [2024-12-16 22:34:54.213880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.738 [2024-12-16 22:34:54.213985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.738 [2024-12-16 22:34:54.213987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.738 [2024-12-16 22:34:54.350211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.738 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.739 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.997 Malloc1 00:29:04.997 [2024-12-16 22:34:54.462850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.997 Malloc2 00:29:04.997 Malloc3 00:29:04.997 Malloc4 00:29:04.997 Malloc5 00:29:04.997 Malloc6 00:29:04.997 Malloc7 00:29:05.257 Malloc8 00:29:05.257 Malloc9 00:29:05.257 Malloc10 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=427198 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 427198 /var/tmp/bdevperf.sock 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 427198 ']' 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.257 22:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.257 { 00:29:05.257 "params": { 00:29:05.257 "name": "Nvme$subsystem", 00:29:05.257 "trtype": "$TEST_TRANSPORT", 00:29:05.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.257 "adrfam": "ipv4", 00:29:05.257 "trsvcid": "$NVMF_PORT", 00:29:05.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.257 "hdgst": ${hdgst:-false}, 00:29:05.257 "ddgst": ${ddgst:-false} 00:29:05.257 }, 00:29:05.257 "method": "bdev_nvme_attach_controller" 00:29:05.257 } 00:29:05.257 EOF 00:29:05.257 )") 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.257 { 00:29:05.257 "params": { 00:29:05.257 "name": "Nvme$subsystem", 00:29:05.257 "trtype": "$TEST_TRANSPORT", 00:29:05.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.257 "adrfam": "ipv4", 00:29:05.257 "trsvcid": "$NVMF_PORT", 00:29:05.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.257 "hdgst": ${hdgst:-false}, 00:29:05.257 "ddgst": ${ddgst:-false} 00:29:05.257 }, 00:29:05.257 "method": "bdev_nvme_attach_controller" 00:29:05.257 } 00:29:05.257 EOF 00:29:05.257 )") 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.257 { 00:29:05.257 "params": { 00:29:05.257 
"name": "Nvme$subsystem", 00:29:05.257 "trtype": "$TEST_TRANSPORT", 00:29:05.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.257 "adrfam": "ipv4", 00:29:05.257 "trsvcid": "$NVMF_PORT", 00:29:05.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.257 "hdgst": ${hdgst:-false}, 00:29:05.257 "ddgst": ${ddgst:-false} 00:29:05.257 }, 00:29:05.257 "method": "bdev_nvme_attach_controller" 00:29:05.257 } 00:29:05.257 EOF 00:29:05.257 )") 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.257 { 00:29:05.257 "params": { 00:29:05.257 "name": "Nvme$subsystem", 00:29:05.257 "trtype": "$TEST_TRANSPORT", 00:29:05.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.257 "adrfam": "ipv4", 00:29:05.257 "trsvcid": "$NVMF_PORT", 00:29:05.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.257 "hdgst": ${hdgst:-false}, 00:29:05.257 "ddgst": ${ddgst:-false} 00:29:05.257 }, 00:29:05.257 "method": "bdev_nvme_attach_controller" 00:29:05.257 } 00:29:05.257 EOF 00:29:05.257 )") 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.257 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.257 { 00:29:05.257 "params": { 00:29:05.257 "name": "Nvme$subsystem", 00:29:05.257 "trtype": "$TEST_TRANSPORT", 00:29:05.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": "ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.258 { 00:29:05.258 "params": { 00:29:05.258 "name": "Nvme$subsystem", 00:29:05.258 "trtype": "$TEST_TRANSPORT", 00:29:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": "ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.258 { 00:29:05.258 "params": { 00:29:05.258 "name": "Nvme$subsystem", 00:29:05.258 "trtype": "$TEST_TRANSPORT", 00:29:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": "ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 [2024-12-16 22:34:54.938392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:05.258 [2024-12-16 22:34:54.938438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427198 ] 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.258 { 00:29:05.258 "params": { 00:29:05.258 "name": "Nvme$subsystem", 00:29:05.258 "trtype": "$TEST_TRANSPORT", 00:29:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": "ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.258 { 00:29:05.258 "params": { 00:29:05.258 "name": "Nvme$subsystem", 00:29:05.258 "trtype": "$TEST_TRANSPORT", 00:29:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": "ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.258 { 00:29:05.258 "params": { 00:29:05.258 "name": "Nvme$subsystem", 00:29:05.258 "trtype": "$TEST_TRANSPORT", 00:29:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.258 "adrfam": 
"ipv4", 00:29:05.258 "trsvcid": "$NVMF_PORT", 00:29:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.258 "hdgst": ${hdgst:-false}, 00:29:05.258 "ddgst": ${ddgst:-false} 00:29:05.258 }, 00:29:05.258 "method": "bdev_nvme_attach_controller" 00:29:05.258 } 00:29:05.258 EOF 00:29:05.258 )") 00:29:05.258 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:05.528 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:05.528 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:05.528 22:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.528 "params": { 00:29:05.528 "name": "Nvme1", 00:29:05.528 "trtype": "tcp", 00:29:05.528 "traddr": "10.0.0.2", 00:29:05.528 "adrfam": "ipv4", 00:29:05.528 "trsvcid": "4420", 00:29:05.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme2", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme3", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme4", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme5", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme6", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme7", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 
"adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme8", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme9", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 },{ 00:29:05.529 "params": { 00:29:05.529 "name": "Nvme10", 00:29:05.529 "trtype": "tcp", 00:29:05.529 "traddr": "10.0.0.2", 00:29:05.529 "adrfam": "ipv4", 00:29:05.529 "trsvcid": "4420", 00:29:05.529 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:05.529 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:05.529 "hdgst": false, 00:29:05.529 "ddgst": false 00:29:05.529 }, 00:29:05.529 "method": "bdev_nvme_attach_controller" 00:29:05.529 }' 00:29:05.529 [2024-12-16 22:34:55.014152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.529 [2024-12-16 22:34:55.036388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.906 Running I/O for 10 seconds... 
00:29:07.165 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.165 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:07.165 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.165 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.165 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:07.424 22:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.684 22:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 427198 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 427198 ']' 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 427198 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427198 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427198' 00:29:07.684 killing process with pid 427198 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 427198 00:29:07.684 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 427198 00:29:07.684 Received shutdown signal, test time was about 0.886333 seconds 00:29:07.684 00:29:07.684 Latency(us) 00:29:07.684 [2024-12-16T21:34:57.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme1n1 : 0.87 300.50 18.78 0.00 0.00 210133.20 2231.34 208716.56 00:29:07.684 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme2n1 : 0.88 290.84 18.18 0.00 0.00 213689.54 16852.11 214708.42 00:29:07.684 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme3n1 : 0.87 294.10 18.38 0.00 0.00 207378.90 13232.03 213709.78 00:29:07.684 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme4n1 : 0.86 298.62 18.66 0.00 0.00 200158.48 13419.28 210713.84 
00:29:07.684 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme5n1 : 0.88 290.03 18.13 0.00 0.00 202761.26 18225.25 212711.13 00:29:07.684 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme6n1 : 0.89 289.04 18.07 0.00 0.00 199662.45 18599.74 218702.99 00:29:07.684 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme7n1 : 0.88 292.15 18.26 0.00 0.00 193429.46 15104.49 211712.49 00:29:07.684 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme8n1 : 0.87 294.83 18.43 0.00 0.00 187556.33 18849.40 213709.78 00:29:07.684 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme9n1 : 0.85 226.82 14.18 0.00 0.00 237788.40 30208.98 213709.78 00:29:07.684 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.684 Verification LBA range: start 0x0 length 0x400 00:29:07.684 Nvme10n1 : 0.85 225.37 14.09 0.00 0.00 234307.13 17476.27 230686.72 00:29:07.684 [2024-12-16T21:34:57.385Z] =================================================================================================================== 00:29:07.684 [2024-12-16T21:34:57.385Z] Total : 2802.29 175.14 0.00 0.00 207251.19 2231.34 230686.72 00:29:07.943 22:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:08.876 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 426939 00:29:08.876 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:08.876 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.877 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.877 rmmod nvme_tcp 00:29:08.877 rmmod nvme_fabrics 00:29:08.877 rmmod nvme_keyring 00:29:09.135 22:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 426939 ']' 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 426939 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426939 ']' 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426939 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426939 00:29:09.135 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:09.136 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:09.136 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426939' 00:29:09.136 killing process with pid 426939 00:29:09.136 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426939 00:29:09.136 22:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426939 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.395 22:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.395 22:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.930 00:29:11.930 real 0m7.449s 00:29:11.930 user 0m21.599s 00:29:11.930 sys 0m1.335s 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.930 ************************************ 00:29:11.930 END TEST nvmf_shutdown_tc2 00:29:11.930 ************************************ 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:11.930 ************************************ 00:29:11.930 START TEST nvmf_shutdown_tc3 00:29:11.930 ************************************ 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.930 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:11.930 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.931 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:11.931 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:11.931 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.931 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:11.931 Found net devices under 0000:af:00.0: cvl_0_0 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:11.931 Found net devices under 0000:af:00.1: cvl_0_1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.931 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:29:11.931 00:29:11.931 --- 10.0.0.2 ping statistics --- 00:29:11.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.931 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:29:11.931 00:29:11.931 --- 10.0.0.1 ping statistics --- 00:29:11.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.931 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.931 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428236 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428236 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428236 ']' 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
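For reference, the namespace wiring just traced (nvmf/common.sh@250-@291) condenses to the sketch below; interface names, addresses and the port are exactly as in this log, with only the harness plumbing stripped:

    # put the target port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because nvmf_tgt is then launched with the ip netns exec prefix seen above, only the 10.0.0.2 side of this link ever reaches the target.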
00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.932 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:11.932 [2024-12-16 22:35:01.529591] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:11.932 [2024-12-16 22:35:01.529635] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.932 [2024-12-16 22:35:01.605077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.932 [2024-12-16 22:35:01.627759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.932 [2024-12-16 22:35:01.627794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.932 [2024-12-16 22:35:01.627801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.932 [2024-12-16 22:35:01.627807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.932 [2024-12-16 22:35:01.627812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.932 [2024-12-16 22:35:01.629295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.932 [2024-12-16 22:35:01.629404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.932 [2024-12-16 22:35:01.629491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.932 [2024-12-16 22:35:01.629489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.191 [2024-12-16 22:35:01.768819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:12.191 22:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.191 22:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.191 Malloc1 
00:29:12.191 [2024-12-16 22:35:01.872947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.450 Malloc2 00:29:12.450 Malloc3 00:29:12.450 Malloc4 00:29:12.450 Malloc5 00:29:12.450 Malloc6 00:29:12.450 Malloc7 00:29:12.709 Malloc8 00:29:12.709 Malloc9 00:29:12.709 Malloc10 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428496 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428496 /var/tmp/bdevperf.sock 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428496 ']' 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:12.709 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:12.710 { 00:29:12.710 "params": { 00:29:12.710 "name": "Nvme$subsystem", 00:29:12.710 "trtype": "$TEST_TRANSPORT", 00:29:12.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.710 "adrfam": "ipv4", 00:29:12.710 "trsvcid": "$NVMF_PORT", 00:29:12.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.710 "hdgst": ${hdgst:-false}, 00:29:12.710 "ddgst": ${ddgst:-false} 00:29:12.710 }, 00:29:12.710 "method": "bdev_nvme_attach_controller" 00:29:12.710 } 00:29:12.710 EOF 00:29:12.710 )") 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[... the identical @562/@582 heredoc expansion and cat are traced once per subsystem, ten times in all ...]
[2024-12-16 22:35:02.344773] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:12.710 [2024-12-16 22:35:02.344820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428496 ] 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
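Each stanza that gen_nvmf_target_json emits (the comma-joined JSON printed next) is just the file form of one bdev_nvme_attach_controller RPC. Issued by hand against the same socket, the first of the ten would look roughly like this; the rpc.py flags are standard SPDK usage and are not part of this trace:

    # attach subsystem cnode1 as bdev Nvme1n1 over NVMe/TCP
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1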
00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:12.710 22:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:12.710 "params": { 00:29:12.710 "name": "Nvme1", 00:29:12.710 "trtype": "tcp", 00:29:12.710 "traddr": "10.0.0.2", 00:29:12.710 "adrfam": "ipv4", 00:29:12.710 "trsvcid": "4420", 00:29:12.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.710 "hdgst": false, 00:29:12.710 "ddgst": false 00:29:12.710 }, 00:29:12.710 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme2", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme3", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme4", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme5", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme6", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme7", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme8", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme9", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 },{ 00:29:12.711 "params": { 00:29:12.711 "name": "Nvme10", 00:29:12.711 "trtype": "tcp", 00:29:12.711 "traddr": "10.0.0.2", 00:29:12.711 "adrfam": "ipv4", 00:29:12.711 "trsvcid": "4420", 00:29:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:12.711 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:12.711 "hdgst": false, 00:29:12.711 "ddgst": false 00:29:12.711 }, 00:29:12.711 "method": "bdev_nvme_attach_controller" 00:29:12.711 }' 00:29:12.970 [2024-12-16 22:35:02.421137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.970 [2024-12-16 22:35:02.443551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.346 Running I/O for 10 seconds... 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.604 22:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:14.604 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.863 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428236 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428236 ']' 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428236 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428236 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.139 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 428236' killing process with pid 428236 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428236 22:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428236 00:29:15.139 [2024-12-16 22:35:04.667768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e6b40 is same with the state(6) to be set
[... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x11e6b40 repeats through 22:35:04.668234 while the target shuts down ...]
00:29:15.140 [2024-12-16 22:35:04.669306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set
[... the same error repeats for tqpair=0x145e2d0; the captured output breaks off mid-stream ...]
22:35:04.669615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.669736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145e2d0 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.670780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7030 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.670791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7030 is same 
with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.670798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7030 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.670804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7030 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.140 [2024-12-16 22:35:04.672152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672200] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the 
state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.672463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11e7500 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 
22:35:04.673670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.141 [2024-12-16 22:35:04.673770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same 
with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.673923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e79f0 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674563] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the 
state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.142 [2024-12-16 22:35:04.674925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.674931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.674937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.674942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7d70 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 
22:35:04.675828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same 
with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.675994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676104] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.676206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8240 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.677156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.677170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.677177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.677185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the state(6) to be set 00:29:15.143 [2024-12-16 22:35:04.677196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the 
state(6) to be set
00:29:15.144 [2024-12-16 22:35:04.677203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8710 is same with the state(6) to be set
00:29:15.144 (last message repeated for tqpair=0x11e8710 through [2024-12-16 22:35:04.677547])
00:29:15.144 [2024-12-16 22:35:04.678288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8c00 is same with the state(6) to be set
00:29:15.144 [2024-12-16 22:35:04.678544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set
00:29:15.145 (last message repeated for tqpair=0x11e90d0 through [2024-12-16 22:35:04.678732])
00:29:15.145 [2024-12-16 22:35:04.682615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:15.145 [2024-12-16 22:35:04.682646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.145 (same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3)
00:29:15.145 [2024-12-16 22:35:04.682699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134bf0 is same with the state(6) to be set
00:29:15.145 (identical cid:0-3 ASYNC EVENT REQUEST abort groups and recv-state errors logged for tqpair=0x11729a0, 0x1177a40, 0xd0f680, 0xc42610, 0xd0e200, 0xd12c40, 0xd1d0b0 and 0xd15420 through [2024-12-16 22:35:04.683357])
00:29:15.146 [2024-12-16 22:35:04.684064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.146 [2024-12-16 22:35:04.684088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.147 (same WRITE/ABORTED - SQ DELETION pair repeated for cid:1 through cid:63, lba 24704 through 32640 in steps of 128, through [2024-12-16 22:35:04.685036])
00:29:15.147 (WRITE commands cid:53 through cid:63, lba 31360 through 32640, logged and aborted a second time, [2024-12-16 22:35:04.685295] through [2024-12-16 22:35:04.685467])
00:29:15.147 [2024-12-16 22:35:04.685476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.147 [2024-12-16 22:35:04.685482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.148 (same READ/ABORTED - SQ DELETION pair repeated for cid:1 through cid:23, lba 24704 through 27520, through [2024-12-16 22:35:04.685821])
00:29:15.148 [2024-12-16 22:35:04.686702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set
recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686849] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.686907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e90d0 is same with the state(6) to be set 00:29:15.148 [2024-12-16 22:35:04.695023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.148 [2024-12-16 22:35:04.695037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.148 [2024-12-16 22:35:04.695049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-12-16 22:35:04.695057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.149 [2024-12-16 22:35:04.695067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-12-16 22:35:04.695075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.149 [2024-12-16 22:35:04.695085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-12-16 22:35:04.695093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.149 [2024-12-16 22:35:04.695103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.149 [2024-12-16 22:35:04.695111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.149 [2024-12-16 
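The "(00/08)" printed with every aborted completion above is the NVMe status pair (status code type / status code): type 0x0 is the generic command status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion", meaning these reads were simply still in flight when their submission queue was torn down during the reset. A minimal decoding sketch (the helper name and its two-entry table are illustrative, not SPDK code):

```python
# Minimal sketch: decode the "(sct/sc)" status pair printed by
# spdk_nvme_print_completion.  The two-entry table is illustrative,
# not SPDK's full status table.
import re

GENERIC_STATUS = {          # NVMe base spec, Status Code Type 0x0
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(line):
    """Pull a '(00/08)'-style sct/sc pair out of a completion log line."""
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct=0x%x sc=0x%02x" % (sct, sc)

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0"))
# -> ABORTED - SQ DELETION
```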
[... 28 identical READ/ABORTED - SQ DELETION pairs omitted: cid:25-52, lba:27776-31232, len:128 each, 22:35:04.695049-.695553 ...]
00:29:15.149 [2024-12-16 22:35:04.697106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1134bf0 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11729a0 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177a40 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:15.149 [2024-12-16 22:35:04.697219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 3 identical ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs omitted: cid:1-3, 22:35:04.697229-.697276 ...]
00:29:15.149 [2024-12-16 22:35:04.697284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11832b0 is same with the state(6) to be set
00:29:15.149 [2024-12-16 22:35:04.697302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f680 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42610 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e200 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12c40 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1d0b0 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.697389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd15420 (9): Bad file descriptor
00:29:15.149 [2024-12-16 22:35:04.698964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:15.149 [2024-12-16 22:35:04.699685] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:15.149 [2024-12-16 22:35:04.699718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:15.149 [2024-12-16 22:35:04.700095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.149 [2024-12-16 22:35:04.700114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1d0b0 with addr=10.0.0.2, port=4420
00:29:15.149 [2024-12-16 22:35:04.700125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d0b0 is same with the state(6) to be set
00:29:15.149 [2024-12-16 22:35:04.700174] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... the same nvme_tcp.c:1184 *ERROR* line repeated 5 more times, 22:35:04.700502-.700696 ...]
00:29:15.150 [2024-12-16 22:35:04.701041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.150 [2024-12-16 22:35:04.701061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1177a40 with addr=10.0.0.2, port=4420
00:29:15.150 [2024-12-16 22:35:04.701072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1177a40 is same with the state(6) to be set
00:29:15.150 [2024-12-16 22:35:04.701085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1d0b0 (9): Bad file descriptor
00:29:15.150 [2024-12-16 22:35:04.701216] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:15.150 [2024-12-16 22:35:04.701241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177a40 (9): Bad file descriptor
00:29:15.150 [2024-12-16 22:35:04.701253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:15.150 [2024-12-16 22:35:04.701265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
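The reconnect attempts above fail at two layers: "connect() failed, errno = 111" is ECONNREFUSED on Linux (nothing is listening at 10.0.0.2:4420, the IANA-assigned NVMe/TCP port, most likely because the target listener is down at this point in the test), and the "(9)" in each flush failure is EBADF, an already-closed socket descriptor. A quick check against Python's errno tables (the values shown are the Linux ones):

```python
# Minimal sketch: map the raw errno values from the log above to their
# symbolic names.  111 and 9 are the Linux values; other platforms differ.
import errno
import os

for code in (111, 9):
    print(code, errno.errorcode[code], "-", os.strerror(code))
# 111 ECONNREFUSED - Connection refused
# 9 EBADF - Bad file descriptor
```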
00:29:15.150 [2024-12-16 22:35:04.701275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:15.150 [2024-12-16 22:35:04.701285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:15.150 [2024-12-16 22:35:04.701359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:15.150 [2024-12-16 22:35:04.701369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:15.150 [2024-12-16 22:35:04.701377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:15.150 [2024-12-16 22:35:04.701384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:15.150 [2024-12-16 22:35:04.707127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11832b0 (9): Bad file descriptor
00:29:15.150 [2024-12-16 22:35:04.707314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.150 [2024-12-16 22:35:04.707332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2 identical WRITE/ABORTED - SQ DELETION pairs (cid:62-63, lba:32512-32640) and 61 identical READ/ABORTED - SQ DELETION pairs (cid:0-60, lba:24576-32256, len:128 each) omitted, 22:35:04.707350-.708645 ...]
00:29:15.151 [2024-12-16 22:35:04.708656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17c20 is same with the state(6) to be set
00:29:15.151 [2024-12-16 22:35:04.710015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.151 [2024-12-16 22:35:04.710033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
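Every queue dump in this stretch follows one fixed pattern: one outstanding command per cid with the LBA advancing by exactly the 128-block transfer length, i.e. lba = 24576 + 128 * cid for each entry here (cid:0 maps to lba 24576, cid:61 to 32384). A small post-processing sketch, not part of SPDK or this test suite, that collapses such a dump to a single summary line while asserting that invariant:

```python
# Minimal sketch: collapse one repeated READ/WRITE abort dump into a single
# summary line, asserting the lba = base + 128 * cid pattern visible above.
# Post-processing aid only; none of this is SPDK code.
import re

CMD = re.compile(r"(READ|WRITE) sqid:\d+ cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(log_text):
    cmds = [(op, int(cid), int(lba), int(xfer))
            for op, cid, lba, xfer in CMD.findall(log_text)]
    base = cmds[0][2] - 128 * cmds[0][1]          # infer base LBA from first entry
    assert all(lba == base + 128 * cid for _, cid, lba, _ in cmds)
    cids = sorted(cid for _, cid, _, _ in cmds)
    return "%d aborted commands, cid:%d-%d, lba %d-%d" % (
        len(cmds), cids[0], cids[-1], base + 128 * cids[0], base + 128 * cids[-1])

sample = ("READ sqid:1 cid:0 nsid:1 lba:24576 len:128 ... "
          "READ sqid:1 cid:1 nsid:1 lba:24704 len:128 ...")
print(summarize(sample))   # -> 2 aborted commands, cid:0-1, lba 24576-24704
```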
[... 62 identical READ/ABORTED - SQ DELETION pairs omitted: cid:1-62, lba:24704-32512, len:128 each, 22:35:04.710049-.711339 ...]
00:29:15.153 [2024-12-16 
22:35:04.711351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.711359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.711370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13950f0 is same with the state(6) to be set 00:29:15.153 [2024-12-16 22:35:04.712667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.153 [2024-12-16 22:35:04.712802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.153 [2024-12-16 22:35:04.712809] nvme_qpair.c: 
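Every completion in these dumps carries the same status tuple: ABORTED - SQ DELETION (00/08), i.e. status code type 0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), with dnr:0 (the host may retry) and m:0 (no further status in the error log). As a minimal sketch of where those bits sit, assuming the NVMe base-spec completion-entry layout (status field = CQE DW3 bits 31:17; the helper name is ours, not SPDK's):

# Hypothetical decoder for the status field SPDK prints as "(SCT/SC)" plus
# the m/dnr flags (p, the phase tag, is CQE DW3 bit 16, outside this field).
# Bit positions follow the NVMe base spec CQE DW3 (bit 31 DNR, bit 30 M,
# bits 27:25 SCT, bits 24:17 SC), shifted down by 17.
def decode_status_field(status: int) -> dict:
    return {
        "dnr": (status >> 14) & 0x1,  # Do Not Retry
        "m":   (status >> 13) & 0x1,  # More status information available
        "sct": (status >> 8) & 0x7,   # Status Code Type; 0 = generic
        "sc":  status & 0xFF,         # Status Code; 0x08 = aborted, SQ deleted
    }

print(decode_status_field((0x0 << 8) | 0x08))  # the "(00/08)" seen throughout
# -> {'dnr': 0, 'm': 0, 'sct': 0, 'sc': 8}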
00:29:15.153 [2024-12-16 22:35:04.712667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, lba step 128; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:29:15.155 [2024-12-16 22:35:04.713667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1396400 is same with the state(6) to be set
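Each dump has the same shape: a full queue of sequential 128-block READs, with cid advancing by 1 and lba by exactly len blocks, all flushed with the same abort status once their qpair tears down. A short condenser for runs like these, assuming only the field format printed in this log (the regex and helper are ours):

import re

# Matches the command fields exactly as nvme_io_qpair_print_command prints
# them in this log; anything else on the line is ignored.
CMD_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize(lines):
    """Collapse consecutive notices (cid +1, lba +len) into range summaries."""
    spans = []  # each span: [sqid, first_cid, last_cid, first_lba, last_lba, len]
    for line in lines:
        m = CMD_RE.search(line)
        if not m:
            continue
        sqid, cid, _nsid, lba, length = map(int, m.groups())
        if spans and spans[-1][0] == sqid and spans[-1][5] == length \
                and cid == spans[-1][2] + 1 and lba == spans[-1][4] + length:
            spans[-1][2], spans[-1][4] = cid, lba  # extend the current run
        else:
            spans.append([sqid, cid, cid, lba, lba, length])
    return ["sqid:%d cid:%d-%d lba:%d-%d len:%d (%d cmds)"
            % (s[0], s[1], s[2], s[3], s[4], s[5], s[2] - s[1] + 1)
            for s in spans]

Fed the cid:0-63 dump above, it yields a single line: sqid:1 cid:0-63 lba:24576-32640 len:128 (64 cmds).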
00:29:15.155 [2024-12-16 22:35:04.714648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (64 commands, lba step 128; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:29:15.156 [2024-12-16 22:35:04.715612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397710 is same with the state(6) to be set
00:29:15.156 [2024-12-16 22:35:04.716622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-50 nsid:1 lba:16384-22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (51 commands so far, lba step 128; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, dump continues)
00:29:15.157 [2024-12-16 22:35:04.717379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 
22:35:04.717529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.717575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.717582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120380 is same with the state(6) to be set 00:29:15.158 [2024-12-16 22:35:04.718558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.158 [2024-12-16 22:35:04.718958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.158 [2024-12-16 22:35:04.718967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.718973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.718987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.718995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.719504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.719511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d42f0 is same with the state(6) to be set 00:29:15.159 [2024-12-16 22:35:04.720508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.720522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.720533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.720540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.720549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.720556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.159 [2024-12-16 22:35:04.720564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.159 [2024-12-16 22:35:04.720571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720693] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.720988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.720996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.160 [2024-12-16 22:35:04.721114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.160 [2024-12-16 22:35:04.721120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:15.161 [2024-12-16 22:35:04.721293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 
22:35:04.721439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.161 [2024-12-16 22:35:04.721459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.161 [2024-12-16 22:35:04.721466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7be0 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.722424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722530] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:15.161 [2024-12-16 22:35:04.722548] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:15.161 [2024-12-16 22:35:04.722558] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
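Note (added for readability; not part of the captured output): every completion in the dumps above carries the status pair (00/08), which in NVMe terms is Status Code Type 0x0 (generic command status) and Status Code 0x08 (Command Aborted due to SQ Deletion), the expected outcome for I/O still queued on a submission queue when that queue is torn down during a controller reset. A minimal, self-contained C sketch of decoding that pair from the 16-bit completion Status Field (field layout per the NVMe spec; illustrative code, not SPDK's implementation):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion Status Field layout: P (bit 0), SC (bits 8:1),
     * SCT (bits 11:9), CRD (bits 13:12), M (bit 14), DNR (bit 15). */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xff;  /* status code */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

        printf("(%02x/%02x) p:%u m:%u dnr:%u", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08)
            printf(" -> ABORTED - SQ DELETION");
        printf("\n");
    }

    int main(void)
    {
        /* SCT=0x0 (generic), SC=0x08 (aborted due to SQ deletion),
         * the value seen throughout the dump above. */
        decode_status((uint16_t)((0x0 << 9) | (0x08 << 1)));
        return 0;
    }

Run, this prints "(00/08) p:0 m:0 dnr:0 -> ABORTED - SQ DELETION", mirroring the fields spdk_nvme_print_completion shows above; dnr:0 in particular means the aborts are retryable once the controller comes back.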
00:29:15.161 [2024-12-16 22:35:04.722645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.722954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.722969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd15420 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.722977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd15420 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.723123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.723132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e200 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.723139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e200 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.723275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.723286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12c40 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.723293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12c40 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.723362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.723375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1134bf0 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.723383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134bf0 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.724900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.724917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:15.161 [2024-12-16 22:35:04.725128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.725140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42610 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.725148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42610 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.725296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 [2024-12-16 22:35:04.725306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0f680 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.725313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f680 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.725535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.161 
[2024-12-16 22:35:04.725545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11729a0 with addr=10.0.0.2, port=4420 00:29:15.161 [2024-12-16 22:35:04.725552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11729a0 is same with the state(6) to be set 00:29:15.161 [2024-12-16 22:35:04.725563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd15420 (9): Bad file descriptor 00:29:15.161 [2024-12-16 22:35:04.725572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e200 (9): Bad file descriptor 00:29:15.161 [2024-12-16 22:35:04.725581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12c40 (9): Bad file descriptor 00:29:15.161 [2024-12-16 22:35:04.725589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1134bf0 (9): Bad file descriptor 00:29:15.162 [2024-12-16 22:35:04.725662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.725989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.725996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.162 [2024-12-16 22:35:04.726235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.162 [2024-12-16 22:35:04.726264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.162 [2024-12-16 22:35:04.726272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 
22:35:04.726382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.163 [2024-12-16 22:35:04.726528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.163 [2024-12-16 22:35:04.726535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.163 [2024-12-16 22:35:04.726623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:15.163 [2024-12-16 22:35:04.726630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d68b0 is same with the state(6) to be set
00:29:15.163 task offset: 24576 on job bdev=Nvme8n1 fails
00:29:15.163
00:29:15.163 Latency(us)
00:29:15.163 [2024-12-16T21:35:04.864Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:15.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme1n1 ended in about 0.82 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme1n1             :       0.82     233.40      14.59      77.80       0.00  203439.79   16103.13  212711.13
00:29:15.163 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme2n1 ended in about 0.81 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme2n1             :       0.81     236.63      14.79      78.88       0.00  196792.20   13918.60  215707.06
00:29:15.163 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme3n1 ended in about 0.83 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme3n1             :       0.83     232.64      14.54      77.55       0.00  196366.87   14667.58  214708.42
00:29:15.163 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme4n1 ended in about 0.83 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme4n1             :       0.83     232.02      14.50      77.34       0.00  193084.95   24591.60  199728.76
00:29:15.163 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme5n1 ended in about 0.83 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme5n1             :       0.83     154.32       9.64      77.16       0.00  253016.75   17975.59  236678.58
00:29:15.163 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme6n1 ended in about 0.83 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme6n1             :       0.83     153.96       9.62      76.98       0.00  248564.78   16727.28  220700.28
00:29:15.163 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme7n1 ended in about 0.83 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme7n1             :       0.83     235.20      14.70      76.80       0.00  180137.82   22843.98  206719.27
00:29:15.163 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme8n1 ended in about 0.81 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme8n1             :       0.81     237.11      14.82      79.04       0.00  173212.77   13606.52  211712.49
00:29:15.163 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme9n1 ended in about 0.84 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme9n1             :       0.84     152.30       9.52      76.15       0.00  236138.95   18474.91  224694.86
00:29:15.163 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:15.163 Job: Nvme10n1 ended in about 0.84 seconds with error
00:29:15.163    Verification LBA range: start 0x0 length 0x400
00:29:15.163    Nvme10n1            :       0.84     153.24       9.58      76.62       0.00  229251.82   24092.28  235679.94
00:29:15.163 [2024-12-16T21:35:04.864Z] ===================================================================================================================
00:29:15.163 [2024-12-16T21:35:04.864Z] Total               :               2020.82     126.30     774.31       0.00  207537.28   13606.52  236678.58
00:29:15.163 [2024-12-16 22:35:04.758400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:15.163 [2024-12-16 22:35:04.758455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:15.163 [2024-12-16 22:35:04.758752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.163 [2024-12-16 22:35:04.758770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1d0b0 with addr=10.0.0.2, port=4420
00:29:15.163 [2024-12-16 22:35:04.758781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d0b0 is same with the state(6) to be set
00:29:15.163 [2024-12-16 22:35:04.758986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.163 [2024-12-16 22:35:04.758997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1177a40 with addr=10.0.0.2, port=4420
00:29:15.163 [2024-12-16 22:35:04.759004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1177a40 is same with the state(6) to be set
00:29:15.163 [2024-12-16 22:35:04.759017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42610 (9): Bad
file descriptor 00:29:15.164 [2024-12-16 22:35:04.759028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f680 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.759038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11729a0 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.759046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759128] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
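The four-step sequence repeated above (Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed) is the reconnect poll giving up because the transport side is gone. A sketch of that cycle against SPDK's public controller API; the busy-wait loop is illustrative (the bdev_nvme module drives the poll from a poller instead):

    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    try_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);  /* "resetting controller" */

        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);                     /* still connecting */
        /* rc != 0 here corresponds to "controller reinitialization failed"
         * followed by "in failed state." in the log. */
        return rc;
    }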
00:29:15.164 [2024-12-16 22:35:04.759526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.759543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11832b0 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.759551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11832b0 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.759562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1d0b0 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.759571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177a40 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.759579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.759634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.759640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.759648] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.759708] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:15.164 [2024-12-16 22:35:04.759719] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
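Every "connect() failed, errno = 111" line above is ECONNREFUSED: the target has stopped listening on 10.0.0.2:4420, so each reconnect attempt is rejected at the TCP layer before any NVMe traffic flows. A standalone reproduction with plain POSIX sockets (illustrative; SPDK's posix sock module wraps the same connect() call):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on the port this prints errno 111
             * ("Connection refused"), matching the log lines above. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }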
00:29:15.164 [2024-12-16 22:35:04.759994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11832b0 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.760004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.760010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.760016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.760022] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.760029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.760034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.760041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.760046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.760597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:15.164 [2024-12-16 22:35:04.760707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.760713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.760720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.760726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
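For scale, the MiB/s column in the Latency(us) table above is simply IOPS times the 64 KiB I/O size: for Nvme1n1, 233.40 IOPS x 65536 B = 15,296,102 B/s, about 14.59 MiB/s. Fail/s appears to count the I/Os per second that ended as the aborted completions dumped earlier rather than verifying successfully.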
00:29:15.164 [2024-12-16 22:35:04.760983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.760998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1134bf0 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134bf0 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.761203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.761214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd12c40 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd12c40 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.761366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.761376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e200 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e200 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.761514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.761524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd15420 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd15420 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.761695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.761705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11729a0 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11729a0 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.761916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.761926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0f680 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.761933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f680 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.762124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.164 [2024-12-16 22:35:04.762134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42610 with addr=10.0.0.2, port=4420 00:29:15.164 [2024-12-16 22:35:04.762141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42610 is same with the state(6) to be set 00:29:15.164 [2024-12-16 22:35:04.762171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1134bf0 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd12c40 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0xd0e200 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd15420 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11729a0 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0f680 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42610 (9): Bad file descriptor 00:29:15.164 [2024-12-16 22:35:04.762251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.762258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.762264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:15.164 [2024-12-16 22:35:04.762270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:15.164 [2024-12-16 22:35:04.762277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:15.164 [2024-12-16 22:35:04.762283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:15.164 [2024-12-16 22:35:04.762289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762297] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:15.165 [2024-12-16 22:35:04.762303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:15.165 [2024-12-16 22:35:04.762309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:15.165 [2024-12-16 22:35:04.762315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:15.165 [2024-12-16 22:35:04.762326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:15.165 [2024-12-16 22:35:04.762332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:15.165 [2024-12-16 22:35:04.762338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:15.165 [2024-12-16 22:35:04.762350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:15.165 [2024-12-16 22:35:04.762355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:15.165 [2024-12-16 22:35:04.762362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:15.165 [2024-12-16 22:35:04.762374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:15.165 [2024-12-16 22:35:04.762379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:15.165 [2024-12-16 22:35:04.762385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762392] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:15.165 [2024-12-16 22:35:04.762398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:15.165 [2024-12-16 22:35:04.762404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:15.165 [2024-12-16 22:35:04.762409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:15.165 [2024-12-16 22:35:04.762415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
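The earlier "*WARNING*: spdk_app_stop'd on non-zero" marks the point where bdevperf gave up and stopped the SPDK application framework with a failing status. The public call involved, shown with an illustrative callback that is not this test's code:

    #include "spdk/event.h"

    /* Illustrative error path: a non-zero status passed to spdk_app_stop()
     * stops the reactors and is reported with the warning seen above. */
    static void
    perf_test_done(int status)
    {
        spdk_app_stop(status);
    }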
00:29:15.424 22:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428496 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428496 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428496 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.802 rmmod nvme_tcp 00:29:16.802 
rmmod nvme_fabrics 00:29:16.802 rmmod nvme_keyring 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428236 ']' 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428236 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428236 ']' 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428236 00:29:16.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428236) - No such process 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428236 is not found' 00:29:16.802 Process with pid 428236 is not found 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.802 22:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.708 00:29:18.708 real 0m7.079s 00:29:18.708 user 0m16.182s 00:29:18.708 sys 0m1.279s 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:18.708 ************************************ 00:29:18.708 END TEST nvmf_shutdown_tc3 00:29:18.708 ************************************ 00:29:18.708 22:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.708 ************************************ 00:29:18.708 START TEST nvmf_shutdown_tc4 00:29:18.708 ************************************ 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.708 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:18.709 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:18.709 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.709 22:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:18.709 Found net devices under 0000:af:00.0: cvl_0_0 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:18.709 Found net devices under 0000:af:00.1: cvl_0_1 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.709 22:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.709 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:29:18.968 00:29:18.968 --- 10.0.0.2 ping statistics --- 00:29:18.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.968 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:18.968 00:29:18.968 --- 10.0.0.1 ping statistics --- 00:29:18.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.968 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=429523 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 429523 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 429523 ']' 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
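The two successful pings close out nvmf_tcp_init: one port pair of the Intel E810 NIC is split into a target side (cvl_0_0, moved into its own network namespace) and an initiator side (cvl_0_1, left in the default namespace), so a single host can exercise real NIC hardware end to end. A minimal sketch of that plumbing, using only commands visible in the trace above (interface names are the ones detected in this run; run as root):

  ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Every target-side command is then wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt invocation above carries that prefix.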
00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.968 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:18.968 [2024-12-16 22:35:08.665835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:18.968 [2024-12-16 22:35:08.665879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.226 [2024-12-16 22:35:08.743096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.226 [2024-12-16 22:35:08.765434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.226 [2024-12-16 22:35:08.765472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.226 [2024-12-16 22:35:08.765480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.226 [2024-12-16 22:35:08.765489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.226 [2024-12-16 22:35:08.765494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.226 [2024-12-16 22:35:08.766984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.226 [2024-12-16 22:35:08.767095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.226 [2024-12-16 22:35:08.767224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.226 [2024-12-16 22:35:08.767224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.226 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.227 [2024-12-16 22:35:08.898389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:19.227 22:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.227 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.485 22:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.485 Malloc1 
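At this point create_subsystems (shutdown.sh@27-@36 in the xtrace) regenerates rpcs.txt: the loop appends one configuration block per subsystem, 1 through 10, and a single rpc_cmd call then replays the accumulated file against the target's RPC socket. The block contents are not echoed into this log; the sketch below is a plausible reconstruction using standard SPDK RPC names, with illustrative parameters rather than the ones shutdown.sh actually uses:

  # one block per $i in 1..10 -- these RPC names exist in scripts/rpc.py,
  # but the sizes and serial numbers here are assumptions
  bdev_malloc_create -b Malloc$i 128 512                          # backing ramdisk; source of the Malloc$i notices
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # -a: allow any host to connect
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i      # expose the bdev as a namespace
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The Malloc1 notice above and Malloc2 through Malloc10 just below are the bdev-creation acknowledgements from that replay, and the "Listening on 10.0.0.2 port 4420" notice marks the first listener being added.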
00:29:19.485 [2024-12-16 22:35:09.004783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.485 Malloc2 00:29:19.485 Malloc3 00:29:19.485 Malloc4 00:29:19.485 Malloc5 00:29:19.744 Malloc6 00:29:19.744 Malloc7 00:29:19.744 Malloc8 00:29:19.744 Malloc9 00:29:19.744 Malloc10 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=429784 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:19.744 22:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:20.003 [2024-12-16 22:35:09.514965] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 429523 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429523 ']' 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429523 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429523 00:29:25.281 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.282 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.282 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429523' 00:29:25.282 killing process with pid 429523 00:29:25.282 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 429523 00:29:25.282 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 429523 00:29:25.282 [2024-12-16 22:35:14.513819] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1bf70 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.513869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1bf70 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.514749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c460 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the 
state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 [2024-12-16 22:35:14.515934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c930 is same with the state(6) to be set 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 
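The error flood that begins here is the point of the test: killprocess 429523 above shot the target down while spdk_nvme_perf still had its queues full of writes. The helper's shape can be read back out of the xtrace line numbers (autotest_common.sh@954-@981); the following is a reconstruction under that assumption, not the verbatim source:

  killprocess() {
      local pid=$1 name
      if [ -z "$pid" ]; then return 1; fi               # @954: refuse an empty pid
      if ! kill -0 "$pid"; then                         # @958: probe; tc3's stale pid took this branch
          echo "Process with pid $pid is not found"     # @981
          return 0
      fi
      if [ "$(uname)" = Linux ]; then                   # @959
          name=$(ps --no-headers -o comm= "$pid")       # @960: resolved to reactor_1 in this run
      fi
      if [ "$name" = sudo ]; then return 1; fi          # @964: never kill a bare sudo wrapper
      echo "killing process with pid $pid"              # @972
      kill "$pid"                                       # @973
      wait "$pid"                                       # @978: reap it so the exit status propagates
  }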
00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 [2024-12-16 22:35:14.517847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 
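Each "Write completed with error (sct=0, sc=8)" line is the perf initiator reporting an NVMe completion with Status Code Type 0 (generic command status) and Status Code 0x8, Command Aborted due to SQ Deletion: exactly what a queued write should return when its submission queue is torn down underneath it. "starting I/O failed: -6" is a fresh submission being refused with -ENXIO because the qpair is already gone. At queue depth 128 (-q 128) across multiple qpairs per namespace (-P 4), thousands of these lines are expected, so counting beats scrolling (the log filename below is hypothetical):

  grep -c 'Write completed with error (sct=0, sc=8)' console.log   # in-flight writes aborted by SQ deletion
  grep -c 'starting I/O failed: -6' console.log                    # new submissions refused with -ENXIO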
Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 [2024-12-16 22:35:14.518780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 Write completed with error (sct=0, sc=8) 00:29:25.282 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.519260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.519282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.519290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 [2024-12-16 22:35:14.519297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.519303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.519310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 [2024-12-16 22:35:14.519317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1abf0 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 
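Note how some records tear mid-sentence ("is same with ... the state(6) to be set" split around unrelated output): the target was started with -m 0x1E, so four reactor threads on cores 1-4 share one console with the perf process, and their writes interleave. One way to pull a readable stream back out is to filter on the intact record shape (filename again hypothetical):

  # count recv-state complaints per qpair, ignoring interleaved noise
  grep -o 'tqpair=0x[0-9a-f]* is same with the state(6)' console.log | sort | uniq -c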
00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.519766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O 
failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.520475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with the state(6) to be set 00:29:25.283 [2024-12-16 22:35:14.520487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.520494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.520500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with the state(6) to be set 00:29:25.283 [2024-12-16 22:35:14.520507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.520512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e095a0 is same with starting I/O failed: -6 00:29:25.283 the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 
00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.520800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.520820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.520828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 [2024-12-16 22:35:14.520835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 [2024-12-16 22:35:14.520842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 [2024-12-16 22:35:14.520848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09a90 is same with the state(6) to be set 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.283 starting I/O failed: -6 00:29:25.283 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 [2024-12-16 22:35:14.521156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09f60 is same with the state(6) to be set 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 [2024-12-16 22:35:14.521309] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.284 NVMe io qpair process completion error 00:29:25.284 [2024-12-16 22:35:14.521634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 [2024-12-16 22:35:14.521695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 starting I/O failed: -6 00:29:25.284 [2024-12-16 22:35:14.521702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 [2024-12-16 22:35:14.521710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 [2024-12-16 22:35:14.521716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1e140 is same with the state(6) to be set 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 starting I/O failed: -6 00:29:25.284 Write completed with error (sct=0, sc=8) 00:29:25.284 Write completed with error (sct=0, sc=8) 
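The "CQ transport error -6 (No such device or address) on qpair id N" records are each I/O qpair of cnode1 (and, just below, cnode10) noticing its TCP connection vanish; the ids 1 through 4 are consistent with the four qpairs requested via -P 4, and -6 is the negated errno ENXIO, which the message already spells out. If the moreutils errno tool is installed, the mapping can be confirmed directly:

  errno 6    # ENXIO 6 No such device or address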
00:29:25.284 [2024-12-16 22:35:14.522189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a900 is same with the state(6) to be set
[... same tcp.c:1790 record for tqpair=0x1e0a900 repeated 19 more times (22:35:14.522205 through 22:35:14.522326), interleaved with write failures ...]
00:29:25.284 [2024-12-16 22:35:14.522297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.284 [2024-12-16 22:35:14.522600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0add0 is same with the state(6) to be set
[... same tcp.c:1790 record for tqpair=0x1e0add0 repeated 8 more times (22:35:14.522613 through 22:35:14.522663), interleaved with write failures ...]
00:29:25.285 [2024-12-16 22:35:14.523150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0b2a0 is same with the state(6) to be set
[... same tcp.c:1790 record for tqpair=0x1e0b2a0 repeated 4 more times (22:35:14.523162 through 22:35:14.523180) ...]
00:29:25.285 [2024-12-16 22:35:14.523198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.285 [2024-12-16 22:35:14.523510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a430 is same with the state(6) to be set
[... same tcp.c:1790 record for tqpair=0x1e0a430 repeated 7 more times (22:35:14.523524 through 22:35:14.523565), interleaved with write failures ...]
00:29:25.285 [2024-12-16 22:35:14.524165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... run of identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:25.286 [2024-12-16 22:35:14.525860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.286 NVMe io qpair process completion error
00:29:25.286 Write completed with error (sct=0, sc=8)
[... write-failure records continue ...]
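The repeated tcp.c:1790 records above are the target side of the same teardown: the nvmf TCP transport is asked to move a qpair into a receive state it is already in, logs the request, and ignores it. A simplified stand-in for that guard is sketched below; the real function is nvmf_tcp_qpair_set_recv_state() in SPDK's lib/nvmf/tcp.c, and the struct and function names here are illustrative stand-ins because the real types are internal:

    #include <stdio.h>

    /* Stand-in for the internal nvmf TCP qpair type. */
    struct tcp_qpair { int recv_state; };

    static void
    set_recv_state(struct tcp_qpair *tqpair, int state)
    {
        if (tqpair->recv_state == state) {
            /* This branch produces the repeated log record seen above;
             * state 6 is the value being redundantly re-requested. */
            fprintf(stderr, "The recv state of tqpair=%p is same with "
                    "the state(%d) to be set\n", (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }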
00:29:25.286 [2024-12-16 22:35:14.526799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure records continue ...]
00:29:25.286 [2024-12-16 22:35:14.527691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure records continue ...]
00:29:25.287 [2024-12-16 22:35:14.528675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-failure records continue ...]
00:29:25.287 [2024-12-16 22:35:14.530181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.287 NVMe io qpair process completion error
00:29:25.287 Write completed with error (sct=0, sc=8)
[... write-failure records continue ...]
00:29:25.287 [2024-12-16 22:35:14.531092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure records continue ...]
00:29:25.288 [2024-12-16 22:35:14.532010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure records continue ...]
00:29:25.288 [2024-12-16 22:35:14.533016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-failure records continue ...]
00:29:25.289 [2024-12-16 22:35:14.535095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.289 NVMe io qpair process completion error
00:29:25.289 Write completed with error (sct=0, sc=8)
[... write-failure records continue ...]
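Each "Write completed with error (sct=0, sc=8)" record is an I/O callback printing the NVMe completion status pair: status code type 0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is what queued writes complete with once their submission queue is torn down with the connection. A sketch of such a callback follows; write_done is an illustrative name, while the SPDK types and constants in the comments are real:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* For the failures in this log: sct=0 is
             * SPDK_NVME_SCT_GENERIC and sc=8 is
             * SPDK_NVME_SC_ABORTED_SQ_DELETION. */
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
        }
    }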
00:29:25.289 [2024-12-16 22:35:14.536083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure records continue ...]
00:29:25.289 [2024-12-16 22:35:14.536987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure records continue ...]
00:29:25.290 [2024-12-16 22:35:14.538013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-failure records continue ...]
completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write 
completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 [2024-12-16 22:35:14.544367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.290 NVMe io qpair process completion error 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error 
(sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 [2024-12-16 22:35:14.545403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.290 Write completed with error (sct=0, sc=8) 00:29:25.290 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 [2024-12-16 22:35:14.546184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with 
error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error 
(sct=0, sc=8) 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 [2024-12-16 22:35:14.547218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 
00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.291 Write completed with error (sct=0, sc=8) 00:29:25.291 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 [2024-12-16 22:35:14.549966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.292 NVMe io qpair process completion error 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, 
sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 
Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 [2024-12-16 22:35:14.551615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting 
I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 starting I/O failed: -6 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.292 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 [2024-12-16 22:35:14.552726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write 
completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write 
completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 [2024-12-16 22:35:14.554462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.293 NVMe io qpair process completion error 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 [2024-12-16 22:35:14.555563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 
00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 starting I/O failed: -6 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.293 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 [2024-12-16 22:35:14.556506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: 
-6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 [2024-12-16 22:35:14.557536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 
1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed 
with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.294 [2024-12-16 22:35:14.563289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.294 NVMe io qpair process completion error 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 Write completed with error (sct=0, sc=8) 00:29:25.294 starting I/O failed: -6 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 starting I/O failed: -6 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 starting I/O failed: -6 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 starting I/O failed: -6 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 Write completed with error (sct=0, sc=8) 00:29:25.295 starting I/O failed: -6 00:29:25.295 Write completed with error (sct=0, sc=8) 
00:29:25.295 Write completed with error (sct=0, sc=8)
00:29:25.295 starting I/O failed: -6
00:29:25.295 (the two messages above repeat, interleaved, for every write still outstanding on nqn.2016-06.io.spdk:cnode5 and nqn.2016-06.io.spdk:cnode6 while their qpairs are torn down; several hundred duplicate lines elided)
00:29:25.295 [2024-12-16 22:35:14.564261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.295 [2024-12-16 22:35:14.565181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.296 [2024-12-16 22:35:14.566250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.296 [2024-12-16 22:35:14.570754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.296 NVMe io qpair process completion error
00:29:25.296 [2024-12-16 22:35:14.571790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.296 [2024-12-16 22:35:14.572676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.297 [2024-12-16 22:35:14.573764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:25.296 [2024-12-16 22:35:14.576326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:25.296 NVMe io qpair process completion error
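Every failed write above carries the same NVMe completion status. Per the NVMe base specification, sct=0 is the generic status code type and, within it, sc=0x8 is "Command Aborted due to SQ Deletion", which is exactly what should surface while the shutdown test destroys the qpairs under a running workload. A small hedged helper, not part of the test suite, that spells the pair out (only the codes that appear in this log are mapped):

decode_nvme_status() {
    # Decode the (sct, sc) pair printed in the storm above; anything not
    # seen in this log just falls through to a generic message.
    local sct=$1 sc=$2
    case "$sct" in
        0) case "$sc" in
               0) echo "generic: successful completion" ;;
               8) echo "generic: command aborted due to SQ deletion" ;;
               *) echo "generic: status code $sc" ;;
           esac ;;
        *) echo "status code type $sct, status code $sc" ;;
    esac
}
decode_nvme_status 0 8   # -> generic: command aborted due to SQ deletion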
00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 Write completed with error (sct=0, sc=8) 00:29:25.297 starting I/O failed: -6 00:29:25.297 [2024-12-16 22:35:14.576326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:25.297 NVMe io qpair process completion error 00:29:25.297 Initializing NVMe Controllers 00:29:25.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:25.297 Controller IO queue size 128, less than required. 00:29:25.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:25.297 Controller IO queue size 128, less than required. 00:29:25.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:25.297 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:25.298 Controller IO queue size 128, less than required. 
00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:25.298 Controller IO queue size 128, less than required. 00:29:25.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:25.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:25.298 Initialization complete. Launching workers. 
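The queue-size warnings above mean the perf tool asked for a deeper submission queue than the 128 entries each controller advertises, so the surplus requests wait inside the NVMe driver instead. A hedged sketch of how the same workload could be re-run with the depth capped below that limit (-q/-o/-w/-t/-r are standard spdk_nvme_perf flags; only the binary path and transport string are taken from this log, the depth, io size and duration are illustrative):

# Sketch only: keep the io depth under the controller's 128-entry queue so
# nothing has to be queued at the driver; values are illustrative.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'

The latency table that follows shows what the launched workers actually achieved before the controllers were shut down under them.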
00:29:25.298 ========================================================
00:29:25.298                                                                              Latency(us)
00:29:25.298 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2213.82      95.12   57822.52     719.44  107171.08
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2189.23      94.07   58480.98     825.29  106096.59
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2207.83      94.87   58005.58     929.38  104130.72
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2221.73      95.46   57712.93     795.65  102709.56
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2219.37      95.36   57796.38    1038.95  111192.41
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2233.91      95.99   57436.44     965.32  113032.32
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2214.24      95.14   58009.61     722.08  119537.74
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2210.82      95.00   58146.22     828.86  102821.39
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2172.77      93.36   58440.49     727.29   97020.41
00:29:25.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2181.53      93.74   58214.92     835.74   96169.53
00:29:25.298 ========================================================
00:29:25.298 Total                                                                     :  22065.26     948.12   58004.27     719.44  119537.74
00:29:25.298
00:29:25.298 [2024-12-16 22:35:14.579394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5566c0 is same with the state(6) to be set
00:29:25.298 (the same recv-state error is logged for tqpair=0x5e6f00, 0x555a00, 0x555d30, 0x5556d0, 0x5e1ff0, 0x556060, 0x556390, 0x555070 and 0x5553a0; duplicates elided)
00:29:25.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:25.298 22:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
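The Total row is just the ten device rows aggregated: IOPS and MiB/s are straight sums, min and max are the global extremes, and Average is the IOPS-weighted mean latency. A throwaway check with the row values copied from the table (a verification sketch, not part of the test):

# Recompute the Total row from the per-device rows above.
# Expected: ~22065.26 IOPS, ~948.12 MiB/s, ~58004 us weighted average.
awk '{ iops += $1; mibs += $2; lat += $1 * $3 }
     END { printf "Total: %.2f IOPS, %.2f MiB/s, %.2f us avg\n",
           iops, mibs, lat / iops }' <<'EOF'
2213.82 95.12 57822.52
2189.23 94.07 58480.98
2207.83 94.87 58005.58
2221.73 95.46 57712.93
2219.37 95.36 57796.38
2233.91 95.99 57436.44
2214.24 95.14 58009.61
2210.82 95.00 58146.22
2172.77 93.36 58440.49
2181.53 93.74 58214.92
EOF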
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 429784
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429784
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:26.232 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 429784
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:26.233 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:26.233 rmmod nvme_tcp
rmmod nvme_fabrics
00:29:26.491 rmmod nvme_keyring
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
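The NOT wait 429784 sequence at the top of this teardown is autotest's inverted assertion: the step passes precisely because waiting on the dead spdk_nvme_perf process fails. A distilled sketch of the pattern (the real helper in autotest_common.sh also validates its argument and special-cases exit codes above 128, as the es handling in the trace shows; this is only the core idea):

# NOT succeeds only when the wrapped command fails, so the test can assert
# that `wait` on the perf pid reports an error after the target is gone.
NOT() {
    local es=0
    "$@" || es=$?        # run the command, capture its exit status
    (( es != 0 ))        # invert: non-zero status becomes success
}
NOT wait 429784 && echo "wait failed, as the shutdown test expects"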
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 429523 ']'
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 429523
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429523 ']'
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429523
00:29:26.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (429523) - No such process
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429523 is not found'
00:29:26.491 Process with pid 429523 is not found
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:26.491 22:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:28.393
00:29:28.393 real    0m9.747s
00:29:28.393 user    0m24.656s
00:29:28.393 sys     0m5.296s
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:28.393 ************************************
00:29:28.393 END TEST nvmf_shutdown_tc4
00:29:28.393 ************************************
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:28.393
00:29:28.393 real    0m39.985s
00:29:28.393 user    1m36.871s
00:29:28.393 sys     0m13.937s
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:28.393 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:28.393 ************************************
00:29:28.393 END TEST nvmf_shutdown
00:29:28.393 ************************************
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:28.653 ************************************
00:29:28.653 START TEST nvmf_nsid
00:29:28.653 ************************************
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:28.653 * Looking for test storage...
00:29:28.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:28.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:28.653 --rc genhtml_branch_coverage=1
00:29:28.653 --rc genhtml_function_coverage=1
00:29:28.653 --rc genhtml_legend=1
00:29:28.653 --rc geninfo_all_blocks=1
00:29:28.653 --rc geninfo_unexecuted_blocks=1
00:29:28.653 '
00:29:28.653 (the same option block is echoed three more times, for LCOV_OPTS=..., export 'LCOV=lcov ...' and LCOV='lcov ...'; duplicates elided)
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux
== FreeBSD ]]
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:28.653 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same three toolchain directories repeated several times; elided):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:(the same three toolchain directories repeated several times; elided):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(same toolchain PATH with one more prepend; elided):/var/lib/snapd/snap/bin
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:(the full PATH echoed back; elided)
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:28.913 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:28.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid --
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.914 22:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.490 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:35.491 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:35.491 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
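The arrays being filled above are PCI vendor:device IDs: Intel (0x8086) 0x1592 and 0x159b for E810, 0x37d2 for X722, plus the Mellanox (0x15b3) IDs, and the two hits at 0000:af:00.0/.1 are E810 ports bound to the ice driver. Outside the harness, the same lookup can be approximated with pciutils (a hedged sketch; only IDs that appear in this log are used):

# List the E810 ports the harness matched, then sweep the other Intel IDs
# it scans for; lspci -d filters by vendor:device, -D shows the PCI domain.
lspci -Dnn -d 8086:159b          # expect 0000:af:00.0 and 0000:af:00.1
for dev in 8086:1592 8086:37d2; do
    lspci -Dnn -d "$dev"
done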
00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.491 22:35:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:35.491 Found net devices under 0000:af:00.0: cvl_0_0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:35.491 Found net devices under 0000:af:00.1: cvl_0_1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.491 22:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:35.491 00:29:35.491 --- 10.0.0.2 ping statistics --- 00:29:35.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.491 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:35.491 00:29:35.491 --- 10.0.0.1 ping statistics --- 00:29:35.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.491 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=434161 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 434161 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434161 ']' 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.491 [2024-12-16 22:35:24.334239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:35.491 [2024-12-16 22:35:24.334285] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.491 [2024-12-16 22:35:24.410830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.491 [2024-12-16 22:35:24.433280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.491 [2024-12-16 22:35:24.433320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.491 [2024-12-16 22:35:24.433327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.491 [2024-12-16 22:35:24.433334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.491 [2024-12-16 22:35:24.433338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.491 [2024-12-16 22:35:24.433842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.491 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=434232 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
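The topology is now fixed: the first target runs inside the cvl_0_0_ns_spdk namespace behind 10.0.0.2, while tgt2 stays in the root namespace on 10.0.0.1, the address get_main_ns_ip just echoed. The plumbing commands scattered through the trace above reduce to roughly the following (interface names and the iptables comment tag are taken verbatim from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

The two pings above (0.206 ms and 0.138 ms) are what gates the return 0 from nvmf_tcp_init; everything after this assumes the 10.0.0.0/24 path works in both directions.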
00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=203dd457-8d33-473e-ae14-01f88e15fe3b 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=300118d0-403e-4eb6-b897-e01836967725 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a888e34d-434a-4c83-946e-9f6a2a042535 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.492 null0 00:29:35.492 null1 00:29:35.492 [2024-12-16 22:35:24.623681] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:35.492 [2024-12-16 22:35:24.623727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434232 ] 00:29:35.492 null2 00:29:35.492 [2024-12-16 22:35:24.629385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.492 [2024-12-16 22:35:24.653571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 434232 /var/tmp/tgt2.sock 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434232 ']' 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:35.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
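The three uuidgen calls seed the check this test exists for: each null bdev is attached to cnode2 with a caller-chosen UUID so the host can later read the namespace's NGUID back and compare. The actual rpc_cmd payload is hidden behind a heredoc in the trace, so the following is only a best-guess reconstruction of the calls driven over /var/tmp/tgt2.sock (the RPC names are real SPDK RPCs; sizes and exact ordering are assumed):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc bdev_null_create null0 64 512                 # 64 MiB, 512 B blocks (illustrative)
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"
    # ... repeated for null1/$ns2uuid and null2/$ns3uuid ...
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

The "TCP Transport Init" and "Listening on 10.0.0.1 port 4421" notices in the trace confirm at least the transport and listener halves of this sequence.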
00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:35.492 [2024-12-16 22:35:24.695152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.492 [2024-12-16 22:35:24.717917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:35.492 22:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:35.750 [2024-12-16 22:35:25.215968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.750 [2024-12-16 22:35:25.232054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:35.750 nvme0n1 nvme0n2 00:29:35.750 nvme1n1 00:29:35.750 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:35.750 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:35.750 22:35:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:36.686 22:35:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:38.063 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 203dd457-8d33-473e-ae14-01f88e15fe3b 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=203dd4578d33473eae1401f88e15fe3b 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 203DD4578D33473EAE1401F88E15FE3B 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 203DD4578D33473EAE1401F88E15FE3B == \2\0\3\D\D\4\5\7\8\D\3\3\4\7\3\E\A\E\1\4\0\1\F\8\8\E\1\5\F\E\3\B ]] 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 300118d0-403e-4eb6-b897-e01836967725 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=300118d0403e4eb6b897e01836967725 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 300118D0403E4EB6B897E01836967725 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 300118D0403E4EB6B897E01836967725 == \3\0\0\1\1\8\D\0\4\0\3\E\4\E\B\6\B\8\9\7\E\0\1\8\3\6\9\6\7\7\2\5 ]] 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:38.063 22:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a888e34d-434a-4c83-946e-9f6a2a042535 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a888e34d434a4c83946e9f6a2a042535 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A888E34D434A4C83946E9F6A2A042535 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A888E34D434A4C83946E9F6A2A042535 == \A\8\8\8\E\3\4\D\4\3\4\A\4\C\8\3\9\4\6\E\9\F\6\A\2\A\0\4\2\5\3\5 ]] 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 434232 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434232 ']' 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434232 00:29:38.063 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434232 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434232' 00:29:38.325 killing process with pid 434232 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434232 00:29:38.325 22:35:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434232 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.584 rmmod nvme_tcp 00:29:38.584 rmmod nvme_fabrics 00:29:38.584 rmmod nvme_keyring 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 434161 ']' 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 434161 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434161 ']' 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434161 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434161 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434161' 00:29:38.584 killing process with pid 434161 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434161 00:29:38.584 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434161 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.843 22:35:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.748 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.748 00:29:40.748 real 0m12.278s 00:29:40.748 user 0m9.511s 00:29:40.748 
sys 0m5.483s 00:29:40.748 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.748 22:35:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:40.748 ************************************ 00:29:40.748 END TEST nvmf_nsid 00:29:40.748 ************************************ 00:29:41.007 22:35:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:41.007 00:29:41.007 real 18m35.347s 00:29:41.007 user 49m15.504s 00:29:41.007 sys 4m34.917s 00:29:41.007 22:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.007 22:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:41.007 ************************************ 00:29:41.007 END TEST nvmf_target_extra 00:29:41.007 ************************************ 00:29:41.007 22:35:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:41.007 22:35:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.007 22:35:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.007 22:35:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.007 ************************************ 00:29:41.007 START TEST nvmf_host 00:29:41.007 ************************************ 00:29:41.007 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:41.007 * Looking for test storage... 00:29:41.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:41.007 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.007 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.007 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.266 --rc genhtml_branch_coverage=1 00:29:41.266 --rc genhtml_function_coverage=1 00:29:41.266 --rc genhtml_legend=1 00:29:41.266 --rc geninfo_all_blocks=1 00:29:41.266 --rc geninfo_unexecuted_blocks=1 00:29:41.266 00:29:41.266 ' 00:29:41.266 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.267 --rc genhtml_branch_coverage=1 00:29:41.267 --rc genhtml_function_coverage=1 00:29:41.267 --rc genhtml_legend=1 00:29:41.267 --rc geninfo_all_blocks=1 00:29:41.267 --rc geninfo_unexecuted_blocks=1 00:29:41.267 00:29:41.267 ' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.267 --rc genhtml_branch_coverage=1 00:29:41.267 --rc genhtml_function_coverage=1 00:29:41.267 --rc genhtml_legend=1 00:29:41.267 --rc geninfo_all_blocks=1 00:29:41.267 --rc geninfo_unexecuted_blocks=1 00:29:41.267 00:29:41.267 ' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.267 --rc genhtml_branch_coverage=1 00:29:41.267 --rc genhtml_function_coverage=1 00:29:41.267 --rc genhtml_legend=1 00:29:41.267 --rc geninfo_all_blocks=1 00:29:41.267 --rc geninfo_unexecuted_blocks=1 00:29:41.267 00:29:41.267 ' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
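The lt 1.15 2 walk traced above is scripts/common.sh deciding whether the installed lcov (1.15 in this run) predates version 2, which is what selects the --rc lcov_branch_coverage/lcov_function_coverage flags exported into LCOV_OPTS. Stripped of the xtrace noise, it is a plain component-wise walk over dotted version strings; a sketch under an assumed helper name (ver_lt is ours, the IFS splitting and per-field logic mirror the trace):

    ver_lt() {    # usage: ver_lt A B -> success if version A < version B
        local IFS=.-: i a b
        read -ra a <<<"$1"
        read -ra b <<<"$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: keep the branch/function coverage flags"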
00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.267 ************************************ 00:29:41.267 START TEST nvmf_multicontroller 00:29:41.267 ************************************ 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:41.267 * Looking for test storage... 
00:29:41.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.267 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.527 --rc genhtml_branch_coverage=1 00:29:41.527 --rc genhtml_function_coverage=1 00:29:41.527 --rc genhtml_legend=1 00:29:41.527 --rc geninfo_all_blocks=1 00:29:41.527 --rc geninfo_unexecuted_blocks=1 00:29:41.527 00:29:41.527 ' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.527 --rc genhtml_branch_coverage=1 00:29:41.527 --rc genhtml_function_coverage=1 00:29:41.527 --rc genhtml_legend=1 00:29:41.527 --rc geninfo_all_blocks=1 00:29:41.527 --rc geninfo_unexecuted_blocks=1 00:29:41.527 00:29:41.527 ' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.527 --rc genhtml_branch_coverage=1 00:29:41.527 --rc genhtml_function_coverage=1 00:29:41.527 --rc genhtml_legend=1 00:29:41.527 --rc geninfo_all_blocks=1 00:29:41.527 --rc geninfo_unexecuted_blocks=1 00:29:41.527 00:29:41.527 ' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.527 --rc genhtml_branch_coverage=1 00:29:41.527 --rc genhtml_function_coverage=1 00:29:41.527 --rc genhtml_legend=1 00:29:41.527 --rc geninfo_all_blocks=1 00:29:41.527 --rc geninfo_unexecuted_blocks=1 00:29:41.527 00:29:41.527 ' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:41.527 22:35:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.527 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.528 22:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.528 22:35:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.528 22:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.097 
22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.097 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:48.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:48.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.098 22:35:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:48.098 Found net devices under 0000:af:00.0: cvl_0_0 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:48.098 Found net devices under 0000:af:00.1: cvl_0_1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
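The per-device loop just traced (@410-@428) reduces to one idiom: glob the net/ directory under the PCI function and strip the directory prefix to get interface names. A self-contained sketch (the helper name is illustrative):

    find_net_devs_for_pci() {
        local pci=$1                                   # e.g. 0000:af:00.0
        local devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${devs[0]} ]] || return 1                # no net children: driver not bound
        printf '%s\n' "${devs[@]##*/}"                 # ifnames, e.g. cvl_0_0
    }
    find_net_devs_for_pci 0000:af:00.0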
00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:29:48.098 00:29:48.098 --- 10.0.0.2 ping statistics --- 00:29:48.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.098 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:48.098 00:29:48.098 --- 10.0.0.1 ping statistics --- 00:29:48.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.098 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.098 22:35:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=438415 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 438415 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438415 ']' 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.098 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.098 [2024-12-16 22:35:37.065082] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
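nvmf_tcp_init, completed above, isolates the target port in its own network namespace so that initiator and target on the same host exchange NVMe/TCP over a real link. Condensed from the trace (run as root; interface names as discovered above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator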
00:29:48.098 [2024-12-16 22:35:37.065123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.098 [2024-12-16 22:35:37.143841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.098 [2024-12-16 22:35:37.166308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.098 [2024-12-16 22:35:37.166344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.098 [2024-12-16 22:35:37.166351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.098 [2024-12-16 22:35:37.166357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.098 [2024-12-16 22:35:37.166363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.099 [2024-12-16 22:35:37.167628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.099 [2024-12-16 22:35:37.167725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.099 [2024-12-16 22:35:37.167725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 [2024-12-16 22:35:37.298352] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 Malloc0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 [2024-12-16 22:35:37.357534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 [2024-12-16 22:35:37.369495] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 Malloc1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=438479 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 438479 /var/tmp/bdevperf.sock 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438479 ']' 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
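Written out against scripts/rpc.py (which defaults to the /var/tmp/spdk.sock the target listens on), the topology the rpc_cmd calls above assemble is, with names and ports taken from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is built the same way around Malloc1 on the same two ports,
    # so bdevperf sees two subsystems behind one address.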
00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 NVMe0n1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.099 1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.099 request: 00:29:48.099 { 00:29:48.099 "name": "NVMe0", 00:29:48.099 "trtype": "tcp", 00:29:48.099 "traddr": "10.0.0.2", 00:29:48.099 "adrfam": "ipv4", 00:29:48.099 "trsvcid": "4420", 00:29:48.099 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:48.099 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:48.099 "hostaddr": "10.0.0.1", 00:29:48.099 "prchk_reftag": false, 00:29:48.099 "prchk_guard": false, 00:29:48.099 "hdgst": false, 00:29:48.099 "ddgst": false, 00:29:48.099 "allow_unrecognized_csi": false, 00:29:48.099 "method": "bdev_nvme_attach_controller", 00:29:48.099 "req_id": 1 00:29:48.099 } 00:29:48.099 Got JSON-RPC error response 00:29:48.099 response: 00:29:48.099 { 00:29:48.099 "code": -114, 00:29:48.099 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.099 } 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.099 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.100 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.100 request: 00:29:48.100 { 00:29:48.100 "name": "NVMe0", 00:29:48.100 "trtype": "tcp", 00:29:48.100 "traddr": "10.0.0.2", 00:29:48.100 "adrfam": "ipv4", 00:29:48.100 "trsvcid": "4420", 00:29:48.100 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:48.100 "hostaddr": "10.0.0.1", 00:29:48.100 "prchk_reftag": false, 00:29:48.100 "prchk_guard": false, 00:29:48.100 "hdgst": false, 00:29:48.359 "ddgst": false, 00:29:48.359 "allow_unrecognized_csi": false, 00:29:48.359 "method": "bdev_nvme_attach_controller", 00:29:48.359 "req_id": 1 00:29:48.359 } 00:29:48.359 Got JSON-RPC error response 00:29:48.359 response: 00:29:48.359 { 00:29:48.359 "code": -114, 00:29:48.359 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.359 } 00:29:48.359 22:35:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.359 request: 00:29:48.359 { 00:29:48.359 "name": "NVMe0", 00:29:48.359 "trtype": "tcp", 00:29:48.359 "traddr": "10.0.0.2", 00:29:48.359 "adrfam": "ipv4", 00:29:48.359 "trsvcid": "4420", 00:29:48.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.359 "hostaddr": "10.0.0.1", 00:29:48.359 "prchk_reftag": false, 00:29:48.359 "prchk_guard": false, 00:29:48.359 "hdgst": false, 00:29:48.359 "ddgst": false, 00:29:48.359 "multipath": "disable", 00:29:48.359 "allow_unrecognized_csi": false, 00:29:48.359 "method": "bdev_nvme_attach_controller", 00:29:48.359 "req_id": 1 00:29:48.359 } 00:29:48.359 Got JSON-RPC error response 00:29:48.359 response: 00:29:48.359 { 00:29:48.359 "code": -114, 00:29:48.359 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:48.359 } 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.359 22:35:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.359 request: 00:29:48.359 { 00:29:48.359 "name": "NVMe0", 00:29:48.359 "trtype": "tcp", 00:29:48.359 "traddr": "10.0.0.2", 00:29:48.359 "adrfam": "ipv4", 00:29:48.359 "trsvcid": "4420", 00:29:48.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.359 "hostaddr": "10.0.0.1", 00:29:48.359 "prchk_reftag": false, 00:29:48.359 "prchk_guard": false, 00:29:48.359 "hdgst": false, 00:29:48.359 "ddgst": false, 00:29:48.359 "multipath": "failover", 00:29:48.359 "allow_unrecognized_csi": false, 00:29:48.359 "method": "bdev_nvme_attach_controller", 00:29:48.359 "req_id": 1 00:29:48.359 } 00:29:48.359 Got JSON-RPC error response 00:29:48.359 response: 00:29:48.359 { 00:29:48.359 "code": -114, 00:29:48.359 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:48.359 } 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.359 22:35:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.359 NVMe0n1 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
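The two -x probes above pin down the multipath rules: with -x disable a second path to an existing controller is refused outright, and the -x failover call fails because it points at the 4420 path that is already attached. Attaching the as-yet-unused 4421 listener as a second path then succeeds (condensed; all names from the trace):

    RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable    # -114
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1                           # second path, OK
    # The trace then detaches the 4421 path and re-attaches it as a separate
    # controller, NVMe1, so bdev_nvme_get_controllers | grep -c NVMe gives 2.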
00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.359 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.618 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:48.618 22:35:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.994 { 00:29:49.994 "results": [ 00:29:49.994 { 00:29:49.994 "job": "NVMe0n1", 00:29:49.994 "core_mask": "0x1", 00:29:49.994 "workload": "write", 00:29:49.994 "status": "finished", 00:29:49.994 "queue_depth": 128, 00:29:49.994 "io_size": 4096, 00:29:49.994 "runtime": 1.003271, 00:29:49.994 "iops": 25145.748257449883, 00:29:49.994 "mibps": 98.2255791306636, 00:29:49.994 "io_failed": 0, 00:29:49.994 "io_timeout": 0, 00:29:49.994 "avg_latency_us": 5083.263042273514, 00:29:49.994 "min_latency_us": 1638.4, 00:29:49.994 "max_latency_us": 8925.379047619048 00:29:49.994 } 00:29:49.994 ], 00:29:49.994 "core_count": 1 00:29:49.994 } 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 
-- # '[' -z 438479 ']' 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438479' 00:29:49.994 killing process with pid 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438479 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.994 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:49.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:49.995 [2024-12-16 22:35:37.474999] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:49.995 [2024-12-16 22:35:37.475045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438479 ] 00:29:49.995 [2024-12-16 22:35:37.550340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.995 [2024-12-16 22:35:37.572536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.995 [2024-12-16 22:35:38.276880] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 52786c20-583b-40a0-a0fd-020a38de532a already exists 00:29:49.995 [2024-12-16 22:35:38.276906] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:52786c20-583b-40a0-a0fd-020a38de532a alias for bdev NVMe1n1 00:29:49.995 [2024-12-16 22:35:38.276914] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:49.995 Running I/O for 1 seconds... 00:29:49.995 25100.00 IOPS, 98.05 MiB/s 00:29:49.995 Latency(us) 00:29:49.995 [2024-12-16T21:35:39.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.995 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:49.995 NVMe0n1 : 1.00 25145.75 98.23 0.00 0.00 5083.26 1638.40 8925.38 00:29:49.995 [2024-12-16T21:35:39.696Z] =================================================================================================================== 00:29:49.995 [2024-12-16T21:35:39.696Z] Total : 25145.75 98.23 0.00 0.00 5083.26 1638.40 8925.38 00:29:49.995 Received shutdown signal, test time was about 1.000000 seconds 00:29:49.995 00:29:49.995 Latency(us) 00:29:49.995 [2024-12-16T21:35:39.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.995 [2024-12-16T21:35:39.696Z] =================================================================================================================== 00:29:49.995 [2024-12-16T21:35:39.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.995 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.995 rmmod nvme_tcp 00:29:50.254 rmmod nvme_fabrics 00:29:50.254 rmmod nvme_keyring 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:50.254 
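The bdevperf numbers in the table above are self-consistent: MiB/s equals IOPS times the 4096-byte I/O size. A quick cross-check of the NVMe0n1 row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 25145.75 * 4096 / (1024 * 1024) }'
    # -> 98.23 MiB/s, matching the reported throughput for the 1.003 s write run.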
22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 438415 ']' 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 438415 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438415 ']' 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438415 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438415 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438415' 00:29:50.254 killing process with pid 438415 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438415 00:29:50.254 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438415 00:29:50.521 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.522 22:35:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.427 00:29:52.427 real 0m11.261s 00:29:52.427 user 0m12.501s 00:29:52.427 sys 0m5.170s 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:52.427 ************************************ 00:29:52.427 END TEST nvmf_multicontroller 00:29:52.427 ************************************ 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.427 ************************************ 00:29:52.427 START TEST nvmf_aer 00:29:52.427 ************************************ 00:29:52.427 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:52.687 * Looking for test storage... 00:29:52.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.687 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:52.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.687 --rc genhtml_branch_coverage=1 00:29:52.687 --rc genhtml_function_coverage=1 00:29:52.687 --rc genhtml_legend=1 00:29:52.687 --rc geninfo_all_blocks=1 00:29:52.687 --rc geninfo_unexecuted_blocks=1 00:29:52.687 00:29:52.687 ' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.688 --rc genhtml_branch_coverage=1 00:29:52.688 --rc genhtml_function_coverage=1 00:29:52.688 --rc genhtml_legend=1 00:29:52.688 --rc geninfo_all_blocks=1 00:29:52.688 --rc geninfo_unexecuted_blocks=1 00:29:52.688 00:29:52.688 ' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.688 --rc genhtml_branch_coverage=1 00:29:52.688 --rc genhtml_function_coverage=1 00:29:52.688 --rc genhtml_legend=1 00:29:52.688 --rc geninfo_all_blocks=1 00:29:52.688 --rc geninfo_unexecuted_blocks=1 00:29:52.688 00:29:52.688 ' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.688 --rc genhtml_branch_coverage=1 00:29:52.688 --rc genhtml_function_coverage=1 00:29:52.688 --rc genhtml_legend=1 00:29:52.688 --rc geninfo_all_blocks=1 00:29:52.688 --rc geninfo_unexecuted_blocks=1 00:29:52.688 00:29:52.688 ' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.688 22:35:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.255 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:59.256 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:59.256 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:59.256 Found net devices under 0000:af:00.0: cvl_0_0 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.256 22:35:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:59.256 Found net devices under 0000:af:00.1: cvl_0_1 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.256 22:35:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.256 
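At this point nvmftestinit has discovered the two E810 ports (0000:af:00.0 and 0000:af:00.1, bound to the ice driver), picked cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, and isolated the target port in a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange real NVMe/TCP traffic on a single host instead of short-circuiting through loopback. A minimal standalone sketch of that topology, reusing the interface and namespace names from this run:

  # Sketch of nvmf_tcp_init: move the target port into a private namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listen port; the comment tags the rule so nvmftestfini
  # can find and strip it again during cleanup.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'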
22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:29:59.256 00:29:59.256 --- 10.0.0.2 ping statistics --- 00:29:59.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.256 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:59.256 00:29:59.256 --- 10.0.0.1 ping statistics --- 00:29:59.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.256 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=442361 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 442361 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 442361 ']' 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.256 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.256 [2024-12-16 22:35:48.249882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:59.256 [2024-12-16 22:35:48.249923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.256 [2024-12-16 22:35:48.324533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.256 [2024-12-16 22:35:48.347446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.256 [2024-12-16 22:35:48.347483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.256 [2024-12-16 22:35:48.347489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.256 [2024-12-16 22:35:48.347495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.256 [2024-12-16 22:35:48.347499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.256 [2024-12-16 22:35:48.348959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.256 [2024-12-16 22:35:48.349068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.257 [2024-12-16 22:35:48.349173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.257 [2024-12-16 22:35:48.349174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 [2024-12-16 22:35:48.480770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 Malloc0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 [2024-12-16 22:35:48.546288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 [ 00:29:59.257 { 00:29:59.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:59.257 "subtype": "Discovery", 00:29:59.257 "listen_addresses": [], 00:29:59.257 "allow_any_host": true, 00:29:59.257 "hosts": [] 00:29:59.257 }, 00:29:59.257 { 00:29:59.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.257 "subtype": "NVMe", 00:29:59.257 "listen_addresses": [ 00:29:59.257 { 00:29:59.257 "trtype": "TCP", 00:29:59.257 "adrfam": "IPv4", 00:29:59.257 "traddr": "10.0.0.2", 00:29:59.257 "trsvcid": "4420" 00:29:59.257 } 00:29:59.257 ], 00:29:59.257 "allow_any_host": true, 00:29:59.257 "hosts": [], 00:29:59.257 "serial_number": "SPDK00000000000001", 00:29:59.257 "model_number": "SPDK bdev Controller", 00:29:59.257 "max_namespaces": 2, 00:29:59.257 "min_cntlid": 1, 00:29:59.257 "max_cntlid": 65519, 00:29:59.257 "namespaces": [ 00:29:59.257 { 00:29:59.257 "nsid": 1, 00:29:59.257 "bdev_name": "Malloc0", 00:29:59.257 "name": "Malloc0", 00:29:59.257 "nguid": "9075311E52F94DC1A1D09B4E168F46E2", 00:29:59.257 "uuid": "9075311e-52f9-4dc1-a1d0-9b4e168f46e2" 00:29:59.257 } 00:29:59.257 ] 00:29:59.257 } 00:29:59.257 ] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=442390 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 Malloc1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 Asynchronous Event Request test 00:29:59.257 Attaching to 10.0.0.2 00:29:59.257 Attached to 10.0.0.2 00:29:59.257 Registering asynchronous event callbacks... 00:29:59.257 Starting namespace attribute notice tests for all controllers... 00:29:59.257 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:59.257 aer_cb - Changed Namespace 00:29:59.257 Cleaning up... 
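The sequence above is the heart of the test: cnode1 is created with -m 2 (at most two namespaces), Malloc0 is attached as nsid 1, and the aer helper connects and arms Asynchronous Event Requests. Hot-adding Malloc1 as nsid 2 must then complete one of those AERs with a Namespace Attribute Changed notice (aen_event_type 0x02, aen_event_info 0x00), directing the host to re-read log page 4, the Changed Namespace List. The helper touches /tmp/aer_touch_file from its callback, which is exactly what the waitforfile polling loop is gating on. A sketch of the same hot-add against an already-running target, assuming SPDK's stock scripts/rpc.py and the subsystem from this run:

  # Sketch: trigger the namespace-change AEN the aer helper is armed for.
  # Assumes nvmf_tgt is up and serving nqn.2016-06.io.spdk:cnode1 with
  # Malloc0 already attached as nsid 1.
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # A connected controller with an AER outstanding should now see it complete
  # as notice/namespace-attribute-changed and fetch the Changed Namespace List.

The nvmf_get_subsystems dump that follows confirms the result: cnode1 now reports both namespaces, nsid 1 (Malloc0) and nsid 2 (Malloc1).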
00:29:59.257 [ 00:29:59.257 { 00:29:59.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:59.257 "subtype": "Discovery", 00:29:59.257 "listen_addresses": [], 00:29:59.257 "allow_any_host": true, 00:29:59.257 "hosts": [] 00:29:59.257 }, 00:29:59.257 { 00:29:59.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.257 "subtype": "NVMe", 00:29:59.257 "listen_addresses": [ 00:29:59.257 { 00:29:59.257 "trtype": "TCP", 00:29:59.257 "adrfam": "IPv4", 00:29:59.257 "traddr": "10.0.0.2", 00:29:59.257 "trsvcid": "4420" 00:29:59.257 } 00:29:59.257 ], 00:29:59.257 "allow_any_host": true, 00:29:59.257 "hosts": [], 00:29:59.257 "serial_number": "SPDK00000000000001", 00:29:59.257 "model_number": "SPDK bdev Controller", 00:29:59.257 "max_namespaces": 2, 00:29:59.257 "min_cntlid": 1, 00:29:59.257 "max_cntlid": 65519, 00:29:59.257 "namespaces": [ 00:29:59.257 { 00:29:59.257 "nsid": 1, 00:29:59.257 "bdev_name": "Malloc0", 00:29:59.257 "name": "Malloc0", 00:29:59.257 "nguid": "9075311E52F94DC1A1D09B4E168F46E2", 00:29:59.257 "uuid": "9075311e-52f9-4dc1-a1d0-9b4e168f46e2" 00:29:59.257 }, 00:29:59.257 { 00:29:59.257 "nsid": 2, 00:29:59.257 "bdev_name": "Malloc1", 00:29:59.257 "name": "Malloc1", 00:29:59.257 "nguid": "929C0C6DD44444288D78B449892FD82C", 00:29:59.257 "uuid": "929c0c6d-d444-4428-8d78-b449892fd82c" 00:29:59.257 } 00:29:59.257 ] 00:29:59.257 } 00:29:59.257 ] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 442390 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.257 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.257 rmmod 
nvme_tcp 00:29:59.258 rmmod nvme_fabrics 00:29:59.516 rmmod nvme_keyring 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 442361 ']' 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 442361 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 442361 ']' 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 442361 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.516 22:35:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442361 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442361' 00:29:59.516 killing process with pid 442361 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 442361 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 442361 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.516 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.517 22:35:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.052 22:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:02.052 00:30:02.052 real 0m9.156s 00:30:02.052 user 0m5.111s 00:30:02.052 sys 0m4.836s 00:30:02.052 22:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:02.052 22:35:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:02.052 ************************************ 00:30:02.052 END TEST nvmf_aer 00:30:02.053 ************************************ 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.053 ************************************ 00:30:02.053 START TEST nvmf_async_init 00:30:02.053 ************************************ 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:02.053 * Looking for test storage... 00:30:02.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:02.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.053 --rc genhtml_branch_coverage=1 00:30:02.053 --rc genhtml_function_coverage=1 00:30:02.053 --rc genhtml_legend=1 00:30:02.053 --rc geninfo_all_blocks=1 00:30:02.053 --rc geninfo_unexecuted_blocks=1 00:30:02.053 00:30:02.053 ' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:02.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.053 --rc genhtml_branch_coverage=1 00:30:02.053 --rc genhtml_function_coverage=1 00:30:02.053 --rc genhtml_legend=1 00:30:02.053 --rc geninfo_all_blocks=1 00:30:02.053 --rc geninfo_unexecuted_blocks=1 00:30:02.053 00:30:02.053 ' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:02.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.053 --rc genhtml_branch_coverage=1 00:30:02.053 --rc genhtml_function_coverage=1 00:30:02.053 --rc genhtml_legend=1 00:30:02.053 --rc geninfo_all_blocks=1 00:30:02.053 --rc geninfo_unexecuted_blocks=1 00:30:02.053 00:30:02.053 ' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:02.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.053 --rc genhtml_branch_coverage=1 00:30:02.053 --rc genhtml_function_coverage=1 00:30:02.053 --rc genhtml_legend=1 00:30:02.053 --rc geninfo_all_blocks=1 00:30:02.053 --rc geninfo_unexecuted_blocks=1 00:30:02.053 00:30:02.053 ' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.053 22:35:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.053 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:02.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:02.054 22:35:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b607c7d8d03749f0ad999b74c55bf0ef 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.054 22:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.621 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.621 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.621 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:08.622 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:08.622 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:08.622 Found net devices under 0000:af:00.0: cvl_0_0 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:08.622 Found net devices under 0000:af:00.1: cvl_0_1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.622 22:35:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:30:08.622 00:30:08.622 --- 10.0.0.2 ping statistics --- 00:30:08.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.622 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:08.622 00:30:08.622 --- 10.0.0.1 ping statistics --- 00:30:08.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.622 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.622 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=445853 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 445853 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 445853 ']' 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [2024-12-16 22:35:57.468821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
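[editor's note] For readers skimming the trace: the nvmf_tcp_init block above is ordinary iproute2/iptables work. A hand-condensed sketch of the same steps follows (not a verbatim excerpt of nvmf/common.sh; interface names cvl_0_0/cvl_0_1 are this rig's ice ports, and everything runs as root). One port moves into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over a real link even on a single machine:

# Sketch of the network bring-up traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tagged ACCEPT rule; the SPDK_NVMF comment lets teardown remove exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns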
00:30:08.623 [2024-12-16 22:35:57.468865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.623 [2024-12-16 22:35:57.542766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.623 [2024-12-16 22:35:57.564657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.623 [2024-12-16 22:35:57.564691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.623 [2024-12-16 22:35:57.564699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.623 [2024-12-16 22:35:57.564706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.623 [2024-12-16 22:35:57.564711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.623 [2024-12-16 22:35:57.565220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [2024-12-16 22:35:57.695906] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 null0 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b607c7d8d03749f0ad999b74c55bf0ef 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [2024-12-16 22:35:57.744323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 nvme0n1 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [ 00:30:08.623 { 00:30:08.623 "name": "nvme0n1", 00:30:08.623 "aliases": [ 00:30:08.623 "b607c7d8-d037-49f0-ad99-9b74c55bf0ef" 00:30:08.623 ], 00:30:08.623 "product_name": "NVMe disk", 00:30:08.623 "block_size": 512, 00:30:08.623 "num_blocks": 2097152, 00:30:08.623 "uuid": "b607c7d8-d037-49f0-ad99-9b74c55bf0ef", 00:30:08.623 "numa_id": 1, 00:30:08.623 "assigned_rate_limits": { 00:30:08.623 "rw_ios_per_sec": 0, 00:30:08.623 "rw_mbytes_per_sec": 0, 00:30:08.623 "r_mbytes_per_sec": 0, 00:30:08.623 "w_mbytes_per_sec": 0 00:30:08.623 }, 00:30:08.623 "claimed": false, 00:30:08.623 "zoned": false, 00:30:08.623 "supported_io_types": { 00:30:08.623 "read": true, 00:30:08.623 "write": true, 00:30:08.623 "unmap": false, 00:30:08.623 "flush": true, 00:30:08.623 "reset": true, 00:30:08.623 "nvme_admin": true, 00:30:08.623 "nvme_io": true, 00:30:08.623 "nvme_io_md": false, 00:30:08.623 "write_zeroes": true, 00:30:08.623 "zcopy": false, 00:30:08.623 "get_zone_info": false, 00:30:08.623 "zone_management": false, 00:30:08.623 "zone_append": false, 00:30:08.623 "compare": true, 00:30:08.623 "compare_and_write": true, 00:30:08.623 "abort": true, 00:30:08.623 "seek_hole": false, 00:30:08.623 "seek_data": false, 00:30:08.623 "copy": true, 00:30:08.623 "nvme_iov_md": false 00:30:08.623 }, 00:30:08.623 
"memory_domains": [ 00:30:08.623 { 00:30:08.623 "dma_device_id": "system", 00:30:08.623 "dma_device_type": 1 00:30:08.623 } 00:30:08.623 ], 00:30:08.623 "driver_specific": { 00:30:08.623 "nvme": [ 00:30:08.623 { 00:30:08.623 "trid": { 00:30:08.623 "trtype": "TCP", 00:30:08.623 "adrfam": "IPv4", 00:30:08.623 "traddr": "10.0.0.2", 00:30:08.623 "trsvcid": "4420", 00:30:08.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.623 }, 00:30:08.623 "ctrlr_data": { 00:30:08.623 "cntlid": 1, 00:30:08.623 "vendor_id": "0x8086", 00:30:08.623 "model_number": "SPDK bdev Controller", 00:30:08.623 "serial_number": "00000000000000000000", 00:30:08.623 "firmware_revision": "25.01", 00:30:08.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.623 "oacs": { 00:30:08.623 "security": 0, 00:30:08.623 "format": 0, 00:30:08.623 "firmware": 0, 00:30:08.623 "ns_manage": 0 00:30:08.623 }, 00:30:08.623 "multi_ctrlr": true, 00:30:08.623 "ana_reporting": false 00:30:08.623 }, 00:30:08.623 "vs": { 00:30:08.623 "nvme_version": "1.3" 00:30:08.623 }, 00:30:08.623 "ns_data": { 00:30:08.623 "id": 1, 00:30:08.623 "can_share": true 00:30:08.623 } 00:30:08.623 } 00:30:08.623 ], 00:30:08.623 "mp_policy": "active_passive" 00:30:08.623 } 00:30:08.623 } 00:30:08.623 ] 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [2024-12-16 22:35:58.004686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:08.623 [2024-12-16 22:35:58.004739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7230 (9): Bad file descriptor 00:30:08.623 [2024-12-16 22:35:58.136279] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.623 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.623 [ 00:30:08.623 { 00:30:08.623 "name": "nvme0n1", 00:30:08.623 "aliases": [ 00:30:08.623 "b607c7d8-d037-49f0-ad99-9b74c55bf0ef" 00:30:08.623 ], 00:30:08.623 "product_name": "NVMe disk", 00:30:08.623 "block_size": 512, 00:30:08.623 "num_blocks": 2097152, 00:30:08.623 "uuid": "b607c7d8-d037-49f0-ad99-9b74c55bf0ef", 00:30:08.623 "numa_id": 1, 00:30:08.623 "assigned_rate_limits": { 00:30:08.623 "rw_ios_per_sec": 0, 00:30:08.623 "rw_mbytes_per_sec": 0, 00:30:08.623 "r_mbytes_per_sec": 0, 00:30:08.623 "w_mbytes_per_sec": 0 00:30:08.623 }, 00:30:08.623 "claimed": false, 00:30:08.623 "zoned": false, 00:30:08.624 "supported_io_types": { 00:30:08.624 "read": true, 00:30:08.624 "write": true, 00:30:08.624 "unmap": false, 00:30:08.624 "flush": true, 00:30:08.624 "reset": true, 00:30:08.624 "nvme_admin": true, 00:30:08.624 "nvme_io": true, 00:30:08.624 "nvme_io_md": false, 00:30:08.624 "write_zeroes": true, 00:30:08.624 "zcopy": false, 00:30:08.624 "get_zone_info": false, 00:30:08.624 "zone_management": false, 00:30:08.624 "zone_append": false, 00:30:08.624 "compare": true, 00:30:08.624 "compare_and_write": true, 00:30:08.624 "abort": true, 00:30:08.624 "seek_hole": false, 00:30:08.624 "seek_data": false, 00:30:08.624 "copy": true, 00:30:08.624 "nvme_iov_md": false 00:30:08.624 }, 00:30:08.624 "memory_domains": [ 00:30:08.624 { 00:30:08.624 "dma_device_id": "system", 00:30:08.624 "dma_device_type": 1 00:30:08.624 } 00:30:08.624 ], 00:30:08.624 "driver_specific": { 00:30:08.624 "nvme": [ 00:30:08.624 { 00:30:08.624 "trid": { 00:30:08.624 "trtype": "TCP", 00:30:08.624 "adrfam": "IPv4", 00:30:08.624 "traddr": "10.0.0.2", 00:30:08.624 "trsvcid": "4420", 00:30:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.624 }, 00:30:08.624 "ctrlr_data": { 00:30:08.624 "cntlid": 2, 00:30:08.624 "vendor_id": "0x8086", 00:30:08.624 "model_number": "SPDK bdev Controller", 00:30:08.624 "serial_number": "00000000000000000000", 00:30:08.624 "firmware_revision": "25.01", 00:30:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.624 "oacs": { 00:30:08.624 "security": 0, 00:30:08.624 "format": 0, 00:30:08.624 "firmware": 0, 00:30:08.624 "ns_manage": 0 00:30:08.624 }, 00:30:08.624 "multi_ctrlr": true, 00:30:08.624 "ana_reporting": false 00:30:08.624 }, 00:30:08.624 "vs": { 00:30:08.624 "nvme_version": "1.3" 00:30:08.624 }, 00:30:08.624 "ns_data": { 00:30:08.624 "id": 1, 00:30:08.624 "can_share": true 00:30:08.624 } 00:30:08.624 } 00:30:08.624 ], 00:30:08.624 "mp_policy": "active_passive" 00:30:08.624 } 00:30:08.624 } 00:30:08.624 ] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
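[editor's note] The two bdev_get_bdevs dumps above differ only in ctrlr_data.cntlid (1 before the reset, 2 after), which is what shows a fresh admin connection was negotiated. A quick way to pull that field out, using the JSON paths visible in the dumps (jq assumed installed; not something the test itself does):

RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
cntlid=$($RPC bdev_get_bdevs -b nvme0n1 \
         | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
echo "cntlid=$cntlid"   # 1 on first attach, 2 after the reset, 3 on the TLS attach below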
00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hqVi5mbPgI 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hqVi5mbPgI 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.hqVi5mbPgI 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 [2024-12-16 22:35:58.209296] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:08.624 [2024-12-16 22:35:58.209384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 [2024-12-16 22:35:58.229358] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:08.624 nvme0n1 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.624 [ 00:30:08.624 { 00:30:08.624 "name": "nvme0n1", 00:30:08.624 "aliases": [ 00:30:08.624 "b607c7d8-d037-49f0-ad99-9b74c55bf0ef" 00:30:08.624 ], 00:30:08.624 "product_name": "NVMe disk", 00:30:08.624 "block_size": 512, 00:30:08.624 "num_blocks": 2097152, 00:30:08.624 "uuid": "b607c7d8-d037-49f0-ad99-9b74c55bf0ef", 00:30:08.624 "numa_id": 1, 00:30:08.624 "assigned_rate_limits": { 00:30:08.624 "rw_ios_per_sec": 0, 00:30:08.624 "rw_mbytes_per_sec": 0, 00:30:08.624 "r_mbytes_per_sec": 0, 00:30:08.624 "w_mbytes_per_sec": 0 00:30:08.624 }, 00:30:08.624 "claimed": false, 00:30:08.624 "zoned": false, 00:30:08.624 "supported_io_types": { 00:30:08.624 "read": true, 00:30:08.624 "write": true, 00:30:08.624 "unmap": false, 00:30:08.624 "flush": true, 00:30:08.624 "reset": true, 00:30:08.624 "nvme_admin": true, 00:30:08.624 "nvme_io": true, 00:30:08.624 "nvme_io_md": false, 00:30:08.624 "write_zeroes": true, 00:30:08.624 "zcopy": false, 00:30:08.624 "get_zone_info": false, 00:30:08.624 "zone_management": false, 00:30:08.624 "zone_append": false, 00:30:08.624 "compare": true, 00:30:08.624 "compare_and_write": true, 00:30:08.624 "abort": true, 00:30:08.624 "seek_hole": false, 00:30:08.624 "seek_data": false, 00:30:08.624 "copy": true, 00:30:08.624 "nvme_iov_md": false 00:30:08.624 }, 00:30:08.624 "memory_domains": [ 00:30:08.624 { 00:30:08.624 "dma_device_id": "system", 00:30:08.624 "dma_device_type": 1 00:30:08.624 } 00:30:08.624 ], 00:30:08.624 "driver_specific": { 00:30:08.624 "nvme": [ 00:30:08.624 { 00:30:08.624 "trid": { 00:30:08.624 "trtype": "TCP", 00:30:08.624 "adrfam": "IPv4", 00:30:08.624 "traddr": "10.0.0.2", 00:30:08.624 "trsvcid": "4421", 00:30:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.624 }, 00:30:08.624 "ctrlr_data": { 00:30:08.624 "cntlid": 3, 00:30:08.624 "vendor_id": "0x8086", 00:30:08.624 "model_number": "SPDK bdev Controller", 00:30:08.624 "serial_number": "00000000000000000000", 00:30:08.624 "firmware_revision": "25.01", 00:30:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.624 "oacs": { 00:30:08.624 "security": 0, 00:30:08.624 "format": 0, 00:30:08.624 "firmware": 0, 00:30:08.624 "ns_manage": 0 00:30:08.624 }, 00:30:08.624 "multi_ctrlr": true, 00:30:08.624 "ana_reporting": false 00:30:08.624 }, 00:30:08.624 "vs": { 00:30:08.624 "nvme_version": "1.3" 00:30:08.624 }, 00:30:08.624 "ns_data": { 00:30:08.624 "id": 1, 00:30:08.624 "can_share": true 00:30:08.624 } 00:30:08.624 } 00:30:08.624 ], 00:30:08.624 "mp_policy": "active_passive" 00:30:08.624 } 00:30:08.624 } 00:30:08.624 ] 00:30:08.624 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.hqVi5mbPgI 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
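[editor's note] The final leg exercises NVMe/TCP with TLS: the PSK is registered as a keyring_file key, a listener on port 4421 demands a secure channel, and only a host presenting the same key may connect. A sketch of that sequence; the redirection of the echo into the temp file is implied by async_init.sh but elided from the trace, and the PSK literal is the test's well-known sample key, not a secret:

RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
NQN=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host1

KEY_PATH=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"                                  # 0600 as in the trace

$RPC keyring_file_add_key key0 "$KEY_PATH"
$RPC nvmf_subsystem_allow_any_host "$NQN" --disable     # hosts must now be whitelisted
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host "$NQN" "$HOST" --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
     -n "$NQN" -q "$HOST" --psk key0
$RPC bdev_nvme_detach_controller nvme0                  # done; drop the key file
rm -f "$KEY_PATH"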
00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.884 rmmod nvme_tcp 00:30:08.884 rmmod nvme_fabrics 00:30:08.884 rmmod nvme_keyring 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 445853 ']' 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 445853 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 445853 ']' 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 445853 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445853 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445853' 00:30:08.884 killing process with pid 445853 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 445853 00:30:08.884 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 445853 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.143 
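[editor's note] nvmftestfini above unwinds setup in reverse; the iptables trick is that the save/filter/restore pass drops only rules carrying the SPDK_NVMF comment tag added earlier. A hand-condensed sketch (names as in this run; the netns deletion is performed by the harness's _remove_spdk_ns helper, inferred here rather than visible in the trace):

kill "$nvmfpid"                                 # stop nvmf_tgt first
modprobe -v -r nvme-tcp                         # cascades: nvme_tcp, nvme_fabrics, nvme_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only our tagged rule
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                 # cvl_0_0 returns to the root namespace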
22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.143 22:35:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.046 00:30:11.046 real 0m9.323s 00:30:11.046 user 0m2.998s 00:30:11.046 sys 0m4.745s 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:11.046 ************************************ 00:30:11.046 END TEST nvmf_async_init 00:30:11.046 ************************************ 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.046 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.306 ************************************ 00:30:11.306 START TEST dma 00:30:11.306 ************************************ 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.306 * Looking for test storage... 00:30:11.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.306 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.307 --rc genhtml_branch_coverage=1 00:30:11.307 --rc genhtml_function_coverage=1 00:30:11.307 --rc genhtml_legend=1 00:30:11.307 --rc geninfo_all_blocks=1 00:30:11.307 --rc geninfo_unexecuted_blocks=1 00:30:11.307 00:30:11.307 ' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.307 --rc genhtml_branch_coverage=1 00:30:11.307 --rc genhtml_function_coverage=1 00:30:11.307 --rc genhtml_legend=1 00:30:11.307 --rc geninfo_all_blocks=1 00:30:11.307 --rc geninfo_unexecuted_blocks=1 00:30:11.307 00:30:11.307 ' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.307 --rc genhtml_branch_coverage=1 00:30:11.307 --rc genhtml_function_coverage=1 00:30:11.307 --rc genhtml_legend=1 00:30:11.307 --rc geninfo_all_blocks=1 00:30:11.307 --rc geninfo_unexecuted_blocks=1 00:30:11.307 00:30:11.307 ' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.307 --rc genhtml_branch_coverage=1 00:30:11.307 --rc genhtml_function_coverage=1 00:30:11.307 --rc genhtml_legend=1 00:30:11.307 --rc geninfo_all_blocks=1 00:30:11.307 --rc geninfo_unexecuted_blocks=1 00:30:11.307 00:30:11.307 ' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.307 
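[editor's note] The dma.sh preamble above is mostly scripts/common.sh deciding whether the installed lcov is at least 2.x: the lt helper splits both version strings on '.', '-' and ':' and compares them field by field. A compact behavioral re-sketch, inferred from the trace rather than copied from scripts/common.sh (numeric fields assumed):

lt() {  # lt A B -> exit 0 when version A < version B
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"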
22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:11.307 00:30:11.307 real 0m0.213s 00:30:11.307 user 0m0.138s 00:30:11.307 sys 0m0.085s 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:11.307 ************************************ 00:30:11.307 END TEST dma 00:30:11.307 ************************************ 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.307 22:36:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.567 ************************************ 00:30:11.567 START TEST nvmf_identify 00:30:11.567 
************************************ 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.567 * Looking for test storage... 00:30:11.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.567 --rc genhtml_branch_coverage=1 00:30:11.567 --rc genhtml_function_coverage=1 00:30:11.567 --rc genhtml_legend=1 00:30:11.567 --rc geninfo_all_blocks=1 00:30:11.567 --rc geninfo_unexecuted_blocks=1 00:30:11.567 00:30:11.567 ' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.567 --rc genhtml_branch_coverage=1 00:30:11.567 --rc genhtml_function_coverage=1 00:30:11.567 --rc genhtml_legend=1 00:30:11.567 --rc geninfo_all_blocks=1 00:30:11.567 --rc geninfo_unexecuted_blocks=1 00:30:11.567 00:30:11.567 ' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.567 --rc genhtml_branch_coverage=1 00:30:11.567 --rc genhtml_function_coverage=1 00:30:11.567 --rc genhtml_legend=1 00:30:11.567 --rc geninfo_all_blocks=1 00:30:11.567 --rc geninfo_unexecuted_blocks=1 00:30:11.567 00:30:11.567 ' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.567 --rc genhtml_branch_coverage=1 00:30:11.567 --rc genhtml_function_coverage=1 00:30:11.567 --rc genhtml_legend=1 00:30:11.567 --rc geninfo_all_blocks=1 00:30:11.567 --rc geninfo_unexecuted_blocks=1 00:30:11.567 00:30:11.567 ' 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.567 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.568 22:36:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.143 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:18.144 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:18.144 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
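The gather_supported_nvmf_pci_devs trace above classifies NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs) before mapping each match to its net device. Below is a minimal standalone sketch of the same classification; the lspci parsing is an assumption for illustration only, since the SPDK helper reads its own pci_bus_cache rather than calling lspci:

# Sort Ethernet functions into e810/x722/mlx buckets by vendor:device ID,
# mirroring the matching logic in the trace above (sketch, not the SPDK helper).
intel=8086 mellanox=15b3
e810=() x722=() mlx=()
while read -r addr _; do
    id=$(lspci -n -s "$addr" | awk '{print $3}')   # e.g. 8086:159b
    case "$id" in
        "$intel":1592|"$intel":159b) e810+=("$addr") ;;
        "$intel":37d2)               x722+=("$addr") ;;
        "$mellanox":*)               mlx+=("$addr")  ;;
    esac
done < <(lspci -D | grep -i 'ethernet controller')
echo "e810: ${e810[*]} x722: ${x722[*]} mlx: ${mlx[*]}"

On this node the two E810 functions (0000:af:00.0 and 0000:af:00.1, device 0x159b, bound to the ice driver) would land in e810, matching the two "Found" lines above.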
00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:18.144 Found net devices under 0000:af:00.0: cvl_0_0 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:18.144 Found net devices under 0000:af:00.1: cvl_0_1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.144 22:36:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:30:18.144 00:30:18.144 --- 10.0.0.2 ping statistics --- 00:30:18.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.144 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:30:18.144 00:30:18.144 --- 10.0.0.1 ping statistics --- 00:30:18.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.144 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.144 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=450031 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 450031 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 450031 ']' 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 [2024-12-16 22:36:07.155815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
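The nvmf_tcp_init sequence above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, its peer cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420, and both directions are verified with ping. A sketch of the same topology for a box without a spare physical port pair, using a veth pair instead (the veth_init/veth_tgt and nvmf_tgt_ns names are made up for illustration):

# Reproduce the namespace topology from the trace with veth instead of
# physical ports: target address inside the namespace, initiator outside.
NS=nvmf_tgt_ns
ip netns add "$NS"
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # root ns -> target address
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator address

Wrapping the target end in a namespace is what lets NVMF_APP be prefixed with "ip netns exec" above, so the nvmf_tgt listener and the host-side tools exercise a real network hop on a single machine.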
00:30:18.145 [2024-12-16 22:36:07.155861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.145 [2024-12-16 22:36:07.231786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.145 [2024-12-16 22:36:07.254956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.145 [2024-12-16 22:36:07.254997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.145 [2024-12-16 22:36:07.255005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.145 [2024-12-16 22:36:07.255011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.145 [2024-12-16 22:36:07.255017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.145 [2024-12-16 22:36:07.256337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.145 [2024-12-16 22:36:07.256446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.145 [2024-12-16 22:36:07.256567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.145 [2024-12-16 22:36:07.256568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 [2024-12-16 22:36:07.360657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 Malloc0 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 [2024-12-16 22:36:07.456872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.145 [ 00:30:18.145 { 00:30:18.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:18.145 "subtype": "Discovery", 00:30:18.145 "listen_addresses": [ 00:30:18.145 { 00:30:18.145 "trtype": "TCP", 00:30:18.145 "adrfam": "IPv4", 00:30:18.145 "traddr": "10.0.0.2", 00:30:18.145 "trsvcid": "4420" 00:30:18.145 } 00:30:18.145 ], 00:30:18.145 "allow_any_host": true, 00:30:18.145 "hosts": [] 00:30:18.145 }, 00:30:18.145 { 00:30:18.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.145 "subtype": "NVMe", 00:30:18.145 "listen_addresses": [ 00:30:18.145 { 00:30:18.145 "trtype": "TCP", 00:30:18.145 "adrfam": "IPv4", 00:30:18.145 "traddr": "10.0.0.2", 00:30:18.145 "trsvcid": "4420" 00:30:18.145 } 00:30:18.145 ], 00:30:18.145 "allow_any_host": true, 00:30:18.145 "hosts": [], 00:30:18.145 "serial_number": "SPDK00000000000001", 00:30:18.145 "model_number": "SPDK bdev Controller", 00:30:18.145 "max_namespaces": 32, 00:30:18.145 "min_cntlid": 1, 00:30:18.145 "max_cntlid": 65519, 00:30:18.145 "namespaces": [ 00:30:18.145 { 00:30:18.145 "nsid": 1, 00:30:18.145 "bdev_name": "Malloc0", 00:30:18.145 "name": "Malloc0", 00:30:18.145 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:18.145 "eui64": "ABCDEF0123456789", 00:30:18.145 "uuid": "a255b000-8952-4e51-8ce6-684a249d4bcc" 00:30:18.145 } 00:30:18.145 ] 00:30:18.145 } 00:30:18.145 ] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.145 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:18.145 [2024-12-16 22:36:07.508999] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:18.145 [2024-12-16 22:36:07.509047] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450142 ] 00:30:18.145 [2024-12-16 22:36:07.549434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:18.145 [2024-12-16 22:36:07.549478] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:18.145 [2024-12-16 22:36:07.549483] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:18.145 [2024-12-16 22:36:07.549493] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:18.145 [2024-12-16 22:36:07.549504] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:18.145 [2024-12-16 22:36:07.549963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:18.145 [2024-12-16 22:36:07.549995] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b7ade0 0 00:30:18.145 [2024-12-16 22:36:07.560209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:18.145 [2024-12-16 22:36:07.560223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:18.145 [2024-12-16 22:36:07.560227] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:18.145 [2024-12-16 22:36:07.560230] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:18.145 [2024-12-16 22:36:07.560256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.145 [2024-12-16 22:36:07.560262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.145 [2024-12-16 22:36:07.560266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.145 [2024-12-16 22:36:07.560276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:18.145 [2024-12-16 22:36:07.560293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.145 [2024-12-16 22:36:07.568202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.145 [2024-12-16 22:36:07.568212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.145 [2024-12-16 22:36:07.568215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.145 [2024-12-16 22:36:07.568220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.145 [2024-12-16 22:36:07.568229] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:18.145 [2024-12-16 22:36:07.568236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:18.145 [2024-12-16 22:36:07.568241] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:18.145 [2024-12-16 22:36:07.568251] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.145 [2024-12-16 22:36:07.568255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.568265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.568277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.568443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.568449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.568452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.568460] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:18.146 [2024-12-16 22:36:07.568466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:18.146 [2024-12-16 22:36:07.568472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.568485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.568498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.568557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.568562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.568565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.568573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:18.146 [2024-12-16 22:36:07.568580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.568586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.568598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.568607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 
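Between the pings and the identify run traced above, host/identify.sh configures the freshly started nvmf_tgt over JSON-RPC: a TCP transport, a 64 MB/512-byte-block Malloc0 bdev, subsystem cnode1 carrying that namespace, and listeners for both cnode1 and the discovery subsystem. A sketch of the equivalent sequence issued by hand with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket:

# Same RPC sequence as the rpc_cmd calls in the trace, run manually.
RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems   # returns the JSON dump shown above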
00:30:18.146 [2024-12-16 22:36:07.568665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.568670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.568673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.568681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.568689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.568701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.568710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.568778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.568783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.568786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.568794] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:18.146 [2024-12-16 22:36:07.568798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.568804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.568912] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:18.146 [2024-12-16 22:36:07.568916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.568924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.568932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.568938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.568947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.569009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.569014] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.569017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.569025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:18.146 [2024-12-16 22:36:07.569032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.569045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.569054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.569119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.569124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.569127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.569135] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:18.146 [2024-12-16 22:36:07.569138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:18.146 [2024-12-16 22:36:07.569145] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:18.146 [2024-12-16 22:36:07.569156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:18.146 [2024-12-16 22:36:07.569164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.569172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.146 [2024-12-16 22:36:07.569182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.569267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.146 [2024-12-16 22:36:07.569273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.146 [2024-12-16 22:36:07.569276] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569280] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7ade0): datao=0, datal=4096, cccid=0 00:30:18.146 [2024-12-16 22:36:07.569284] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1bd5f40) on tqpair(0x1b7ade0): expected_datao=0, payload_size=4096 00:30:18.146 [2024-12-16 22:36:07.569289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569301] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.569305] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.611201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.611211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.611214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.611218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.611225] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:18.146 [2024-12-16 22:36:07.611230] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:18.146 [2024-12-16 22:36:07.611234] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:18.146 [2024-12-16 22:36:07.611238] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:18.146 [2024-12-16 22:36:07.611243] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:18.146 [2024-12-16 22:36:07.611247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:18.146 [2024-12-16 22:36:07.611259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:18.146 [2024-12-16 22:36:07.611268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.611271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.611275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.146 [2024-12-16 22:36:07.611282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:18.146 [2024-12-16 22:36:07.611294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.146 [2024-12-16 22:36:07.611365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.146 [2024-12-16 22:36:07.611370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.146 [2024-12-16 22:36:07.611373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.146 [2024-12-16 22:36:07.611377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0 00:30:18.146 [2024-12-16 22:36:07.611384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7ade0) 00:30:18.147 
[2024-12-16 22:36:07.611396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.147 [2024-12-16 22:36:07.611401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.147 [2024-12-16 22:36:07.611417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.147 [2024-12-16 22:36:07.611433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.147 [2024-12-16 22:36:07.611453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:18.147 [2024-12-16 22:36:07.611463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:18.147 [2024-12-16 22:36:07.611469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.147 [2024-12-16 22:36:07.611490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd5f40, cid 0, qid 0 00:30:18.147 [2024-12-16 22:36:07.611494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd60c0, cid 1, qid 0 00:30:18.147 [2024-12-16 22:36:07.611498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6240, cid 2, qid 0 00:30:18.147 [2024-12-16 22:36:07.611502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.147 [2024-12-16 22:36:07.611506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6540, cid 4, qid 0 00:30:18.147 [2024-12-16 22:36:07.611604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.611610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.611613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:30:18.147 [2024-12-16 22:36:07.611617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6540) on tqpair=0x1b7ade0 00:30:18.147 [2024-12-16 22:36:07.611621] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:18.147 [2024-12-16 22:36:07.611625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:18.147 [2024-12-16 22:36:07.611635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.147 [2024-12-16 22:36:07.611654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6540, cid 4, qid 0 00:30:18.147 [2024-12-16 22:36:07.611718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.147 [2024-12-16 22:36:07.611723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.147 [2024-12-16 22:36:07.611726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611729] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7ade0): datao=0, datal=4096, cccid=4 00:30:18.147 [2024-12-16 22:36:07.611733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd6540) on tqpair(0x1b7ade0): expected_datao=0, payload_size=4096 00:30:18.147 [2024-12-16 22:36:07.611737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611747] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.611778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.611781] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6540) on tqpair=0x1b7ade0 00:30:18.147 [2024-12-16 22:36:07.611797] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:18.147 [2024-12-16 22:36:07.611816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.147 [2024-12-16 22:36:07.611832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.611843] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.147 [2024-12-16 22:36:07.611856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6540, cid 4, qid 0 00:30:18.147 [2024-12-16 22:36:07.611861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd66c0, cid 5, qid 0 00:30:18.147 [2024-12-16 22:36:07.611960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.147 [2024-12-16 22:36:07.611966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.147 [2024-12-16 22:36:07.611969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611972] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7ade0): datao=0, datal=1024, cccid=4 00:30:18.147 [2024-12-16 22:36:07.611976] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd6540) on tqpair(0x1b7ade0): expected_datao=0, payload_size=1024 00:30:18.147 [2024-12-16 22:36:07.611980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611985] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611989] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.611994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.611998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.612001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.612005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd66c0) on tqpair=0x1b7ade0 00:30:18.147 [2024-12-16 22:36:07.652388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.652399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.652402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.652406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6540) on tqpair=0x1b7ade0 00:30:18.147 [2024-12-16 22:36:07.652415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.652419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.652426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.147 [2024-12-16 22:36:07.652441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6540, cid 4, qid 0 00:30:18.147 [2024-12-16 22:36:07.652518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.147 [2024-12-16 22:36:07.652524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.147 [2024-12-16 22:36:07.652527] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.652530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7ade0): datao=0, datal=3072, cccid=4 00:30:18.147 [2024-12-16 22:36:07.652537] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd6540) on tqpair(0x1b7ade0): expected_datao=0, payload_size=3072 00:30:18.147 [2024-12-16 22:36:07.652540] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.652555] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.652559] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.693272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.693275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6540) on tqpair=0x1b7ade0 00:30:18.147 [2024-12-16 22:36:07.693286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7ade0) 00:30:18.147 [2024-12-16 22:36:07.693296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.147 [2024-12-16 22:36:07.693312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd6540, cid 4, qid 0 00:30:18.147 [2024-12-16 22:36:07.693379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.147 [2024-12-16 22:36:07.693386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.147 [2024-12-16 22:36:07.693389] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7ade0): datao=0, datal=8, cccid=4 00:30:18.147 [2024-12-16 22:36:07.693396] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd6540) on tqpair(0x1b7ade0): expected_datao=0, payload_size=8 00:30:18.147 [2024-12-16 22:36:07.693400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693405] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.693409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.738204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.147 [2024-12-16 22:36:07.738213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.147 [2024-12-16 22:36:07.738218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.147 [2024-12-16 22:36:07.738221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6540) on tqpair=0x1b7ade0 00:30:18.147 ===================================================== 00:30:18.148 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:18.148 ===================================================== 00:30:18.148 Controller Capabilities/Features 00:30:18.148 ================================ 00:30:18.148 Vendor ID: 0000 00:30:18.148 Subsystem Vendor ID: 0000 00:30:18.148 Serial Number: .................... 00:30:18.148 Model Number: ........................................ 
00:30:18.148 Firmware Version: 25.01
00:30:18.148 Recommended Arb Burst: 0
00:30:18.148 IEEE OUI Identifier: 00 00 00
00:30:18.148 Multi-path I/O
00:30:18.148 May have multiple subsystem ports: No
00:30:18.148 May have multiple controllers: No
00:30:18.148 Associated with SR-IOV VF: No
00:30:18.148 Max Data Transfer Size: 131072
00:30:18.148 Max Number of Namespaces: 0
00:30:18.148 Max Number of I/O Queues: 1024
00:30:18.148 NVMe Specification Version (VS): 1.3
00:30:18.148 NVMe Specification Version (Identify): 1.3
00:30:18.148 Maximum Queue Entries: 128
00:30:18.148 Contiguous Queues Required: Yes
00:30:18.148 Arbitration Mechanisms Supported
00:30:18.148 Weighted Round Robin: Not Supported
00:30:18.148 Vendor Specific: Not Supported
00:30:18.148 Reset Timeout: 15000 ms
00:30:18.148 Doorbell Stride: 4 bytes
00:30:18.148 NVM Subsystem Reset: Not Supported
00:30:18.148 Command Sets Supported
00:30:18.148 NVM Command Set: Supported
00:30:18.148 Boot Partition: Not Supported
00:30:18.148 Memory Page Size Minimum: 4096 bytes
00:30:18.148 Memory Page Size Maximum: 4096 bytes
00:30:18.148 Persistent Memory Region: Not Supported
00:30:18.148 Optional Asynchronous Events Supported
00:30:18.148 Namespace Attribute Notices: Not Supported
00:30:18.148 Firmware Activation Notices: Not Supported
00:30:18.148 ANA Change Notices: Not Supported
00:30:18.148 PLE Aggregate Log Change Notices: Not Supported
00:30:18.148 LBA Status Info Alert Notices: Not Supported
00:30:18.148 EGE Aggregate Log Change Notices: Not Supported
00:30:18.148 Normal NVM Subsystem Shutdown event: Not Supported
00:30:18.148 Zone Descriptor Change Notices: Not Supported
00:30:18.148 Discovery Log Change Notices: Supported
00:30:18.148 Controller Attributes
00:30:18.148 128-bit Host Identifier: Not Supported
00:30:18.148 Non-Operational Permissive Mode: Not Supported
00:30:18.148 NVM Sets: Not Supported
00:30:18.148 Read Recovery Levels: Not Supported
00:30:18.148 Endurance Groups: Not Supported
00:30:18.148 Predictable Latency Mode: Not Supported
00:30:18.148 Traffic Based Keep Alive: Not Supported
00:30:18.148 Namespace Granularity: Not Supported
00:30:18.148 SQ Associations: Not Supported
00:30:18.148 UUID List: Not Supported
00:30:18.148 Multi-Domain Subsystem: Not Supported
00:30:18.148 Fixed Capacity Management: Not Supported
00:30:18.148 Variable Capacity Management: Not Supported
00:30:18.148 Delete Endurance Group: Not Supported
00:30:18.148 Delete NVM Set: Not Supported
00:30:18.148 Extended LBA Formats Supported: Not Supported
00:30:18.148 Flexible Data Placement Supported: Not Supported
00:30:18.148 
00:30:18.148 Controller Memory Buffer Support
00:30:18.148 ================================
00:30:18.148 Supported: No
00:30:18.148 
00:30:18.148 Persistent Memory Region Support
00:30:18.148 ================================
00:30:18.148 Supported: No
00:30:18.148 
00:30:18.148 Admin Command Set Attributes
00:30:18.148 ============================
00:30:18.148 Security Send/Receive: Not Supported
00:30:18.148 Format NVM: Not Supported
00:30:18.148 Firmware Activate/Download: Not Supported
00:30:18.148 Namespace Management: Not Supported
00:30:18.148 Device Self-Test: Not Supported
00:30:18.148 Directives: Not Supported
00:30:18.148 NVMe-MI: Not Supported
00:30:18.148 Virtualization Management: Not Supported
00:30:18.148 Doorbell Buffer Config: Not Supported
00:30:18.148 Get LBA Status Capability: Not Supported
00:30:18.148 Command & Feature Lockdown Capability: Not Supported
00:30:18.148 Abort Command Limit: 1
00:30:18.148 Async Event Request Limit: 4
00:30:18.148 Number of Firmware Slots: N/A
00:30:18.148 Firmware Slot 1 Read-Only: N/A
00:30:18.148 Firmware Activation Without Reset: N/A
00:30:18.148 Multiple Update Detection Support: N/A
00:30:18.148 Firmware Update Granularity: No Information Provided
00:30:18.148 Per-Namespace SMART Log: No
00:30:18.148 Asymmetric Namespace Access Log Page: Not Supported
00:30:18.148 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:18.148 Command Effects Log Page: Not Supported
00:30:18.148 Get Log Page Extended Data: Supported
00:30:18.148 Telemetry Log Pages: Not Supported
00:30:18.148 Persistent Event Log Pages: Not Supported
00:30:18.148 Supported Log Pages Log Page: May Support
00:30:18.148 Commands Supported & Effects Log Page: Not Supported
00:30:18.148 Feature Identifiers & Effects Log Page: May Support
00:30:18.148 NVMe-MI Commands & Effects Log Page: May Support
00:30:18.148 Data Area 4 for Telemetry Log: Not Supported
00:30:18.148 Error Log Page Entries Supported: 128
00:30:18.148 Keep Alive: Not Supported
00:30:18.148 
00:30:18.148 NVM Command Set Attributes
00:30:18.148 ==========================
00:30:18.148 Submission Queue Entry Size
00:30:18.148 Max: 1
00:30:18.148 Min: 1
00:30:18.148 Completion Queue Entry Size
00:30:18.148 Max: 1
00:30:18.148 Min: 1
00:30:18.148 Number of Namespaces: 0
00:30:18.148 Compare Command: Not Supported
00:30:18.148 Write Uncorrectable Command: Not Supported
00:30:18.148 Dataset Management Command: Not Supported
00:30:18.148 Write Zeroes Command: Not Supported
00:30:18.148 Set Features Save Field: Not Supported
00:30:18.148 Reservations: Not Supported
00:30:18.148 Timestamp: Not Supported
00:30:18.148 Copy: Not Supported
00:30:18.148 Volatile Write Cache: Not Present
00:30:18.148 Atomic Write Unit (Normal): 1
00:30:18.148 Atomic Write Unit (PFail): 1
00:30:18.148 Atomic Compare & Write Unit: 1
00:30:18.148 Fused Compare & Write: Supported
00:30:18.148 Scatter-Gather List
00:30:18.148 SGL Command Set: Supported
00:30:18.148 SGL Keyed: Supported
00:30:18.148 SGL Bit Bucket Descriptor: Not Supported
00:30:18.148 SGL Metadata Pointer: Not Supported
00:30:18.148 Oversized SGL: Not Supported
00:30:18.148 SGL Metadata Address: Not Supported
00:30:18.148 SGL Offset: Supported
00:30:18.148 Transport SGL Data Block: Not Supported
00:30:18.148 Replay Protected Memory Block: Not Supported
00:30:18.148 
00:30:18.148 Firmware Slot Information
00:30:18.148 =========================
00:30:18.148 Active slot: 0
00:30:18.148 
00:30:18.148 
00:30:18.148 Error Log
00:30:18.148 =========
00:30:18.148 
00:30:18.148 Active Namespaces
00:30:18.148 =================
00:30:18.148 Discovery Log Page
00:30:18.148 ==================
00:30:18.148 Generation Counter: 2
00:30:18.148 Number of Records: 2
00:30:18.148 Record Format: 0
00:30:18.148 
00:30:18.148 Discovery Log Entry 0
00:30:18.148 ----------------------
00:30:18.148 Transport Type: 3 (TCP)
00:30:18.148 Address Family: 1 (IPv4)
00:30:18.148 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:18.148 Entry Flags:
00:30:18.148 Duplicate Returned Information: 1
00:30:18.148 Explicit Persistent Connection Support for Discovery: 1
00:30:18.148 Transport Requirements:
00:30:18.148 Secure Channel: Not Required
00:30:18.148 Port ID: 0 (0x0000)
00:30:18.148 Controller ID: 65535 (0xffff)
00:30:18.148 Admin Max SQ Size: 128
00:30:18.148 Transport Service Identifier: 4420
00:30:18.148 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:18.148 Transport Address: 10.0.0.2
00:30:18.149 Discovery Log Entry 1
00:30:18.149 ----------------------
00:30:18.149 Transport Type: 3 (TCP)
00:30:18.149 Address Family: 1 (IPv4)
00:30:18.149 Subsystem Type: 2 (NVM Subsystem)
00:30:18.149 Entry Flags:
00:30:18.149 Duplicate Returned Information: 0
00:30:18.149 Explicit Persistent Connection Support for Discovery: 0
00:30:18.149 Transport Requirements:
00:30:18.149 Secure Channel: Not Required
00:30:18.149 Port ID: 0 (0x0000)
00:30:18.149 Controller ID: 65535 (0xffff)
00:30:18.149 Admin Max SQ Size: 128
00:30:18.149 Transport Service Identifier: 4420
00:30:18.149 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:30:18.149 Transport Address: 10.0.0.2 [2024-12-16 22:36:07.738300] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:30:18.149 [2024-12-16 22:36:07.738311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd5f40) on tqpair=0x1b7ade0
00:30:18.149 [2024-12-16 22:36:07.738318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:18.149 [2024-12-16 22:36:07.738323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd60c0) on tqpair=0x1b7ade0
00:30:18.149 [2024-12-16 22:36:07.738327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:18.149 [2024-12-16 22:36:07.738332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd6240) on tqpair=0x1b7ade0
00:30:18.149 [2024-12-16 22:36:07.738337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:18.149 [2024-12-16 22:36:07.738341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0
00:30:18.149 [2024-12-16 22:36:07.738345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:18.149 [2024-12-16 22:36:07.738352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:18.149 [2024-12-16 22:36:07.738356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:18.149 [2024-12-16 22:36:07.738360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0)
00:30:18.149 [2024-12-16 22:36:07.738367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:18.149 [2024-12-16 22:36:07.738381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0
00:30:18.149 [2024-12-16 22:36:07.738446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:18.149 [2024-12-16 22:36:07.738452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:18.149 [2024-12-16 22:36:07.738455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:18.149 [2024-12-16 22:36:07.738458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0
00:30:18.149 [2024-12-16 22:36:07.738464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:18.149 [2024-12-16 22:36:07.738468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:18.149 [2024-12-16 22:36:07.738471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0)
00:30:18.149 [2024-12-16
22:36:07.738476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.738489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.738560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.738566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.738568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.738576] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:18.149 [2024-12-16 22:36:07.738580] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:18.149 [2024-12-16 22:36:07.738588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.738600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.738610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.738669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.738675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.738678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.738689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.738702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.738711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.738768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.738773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.738777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.738789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738796] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.738802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.738811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.738881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.738887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.738890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.738902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.738915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.738924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.738984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.738989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.738992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.738996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.739003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.739016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.739025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.739085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.739091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.739094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.739105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.739117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.739127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.739197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.739203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.739206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.739217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.739231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.739240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.739300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.149 [2024-12-16 22:36:07.739305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.149 [2024-12-16 22:36:07.739308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.149 [2024-12-16 22:36:07.739319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.149 [2024-12-16 22:36:07.739326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.149 [2024-12-16 22:36:07.739331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.149 [2024-12-16 22:36:07.739341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.149 [2024-12-16 22:36:07.739408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.739416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 
[2024-12-16 22:36:07.739508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.739517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.739617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.739626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.739721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.739729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.739830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
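The discovery log rendered above advertises two records at 10.0.0.2:4420: the discovery subsystem itself (Entry 0) and the data subsystem nqn.2016-06.io.spdk:cnode1 (Entry 1). As a rough sketch of what a host does with those records outside this harness, the usual nvme-cli flow would look like the commands below; nvme-cli and the kernel nvme-tcp module are assumptions about the reader's environment, not something this job runs:

    # assumes nvme-cli and the nvme-tcp kernel module are available on the host
    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # read the discovery log (Entry 0's subsystem)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach to Entry 1's NVM subsystem
    nvme connect-all -t tcp -a 10.0.0.2 -s 4420                             # or attach to everything the log returns

The repeating FABRIC PROPERTY GET / tcp_req blocks in the surrounding trace are not an error loop: together with the "RTD3E = 0 us" and "shutdown timeout = 10000 ms" messages above, they show the host driving the discovery controller through an orderly shutdown, re-reading the controller status property over the admin queue until shutdown processing completes. On fabrics there are no memory-mapped registers, so every register read goes out as a Property Get capsule on the TCP connection.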
00:30:18.150 [2024-12-16 22:36:07.739838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.739930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.739936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.739939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.739950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.739957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.739962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.739971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740465] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.150 [2024-12-16 22:36:07.740565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.150 [2024-12-16 22:36:07.740571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.150 [2024-12-16 22:36:07.740577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.150 [2024-12-16 22:36:07.740586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.150 [2024-12-16 22:36:07.740643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.150 [2024-12-16 22:36:07.740649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.150 [2024-12-16 22:36:07.740652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.740663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.740675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.740685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.740748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.740753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.740756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.740767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 
[2024-12-16 22:36:07.740779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.740788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.740846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.740851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.740855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.740866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.740878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.740887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.740953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.740958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.740961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.740973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.740980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.740985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.740995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.741053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.741058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.741061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.741072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.741084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.741093] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.741153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.741159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.741162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.741173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.741185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.741200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.741259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.741264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.741267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.741279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.741290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.741300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.741367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.741374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.741378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.741389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.741402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.741411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.741472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 
[2024-12-16 22:36:07.741477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.741480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.741492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.741499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.741504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.741514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.744275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.744286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.744289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.744292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.744304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.744349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.744352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7ade0) 00:30:18.151 [2024-12-16 22:36:07.744359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.151 [2024-12-16 22:36:07.744372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd63c0, cid 3, qid 0 00:30:18.151 [2024-12-16 22:36:07.744502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.151 [2024-12-16 22:36:07.744508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.151 [2024-12-16 22:36:07.744512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.151 [2024-12-16 22:36:07.744515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd63c0) on tqpair=0x1b7ade0 00:30:18.151 [2024-12-16 22:36:07.744522] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:18.151 00:30:18.151 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:18.151 [2024-12-16 22:36:07.772343] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
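The identify.sh step above kicks off the second spdk_nvme_identify run, this time aimed directly at nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem, with -L all enabling every debug log flag (which is where the nvme_tcp.c / nvme_ctrlr.c *DEBUG* lines below come from). The -r argument is SPDK's key:value transport ID string. To replay just this step by hand, something like the following should work; the workspace path and flag spellings are taken from the command line logged above:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all

The EAL parameter dump below is the DPDK half of that startup: spdk_nvme_identify is an SPDK application, so it initializes DPDK on a single core (-c 0x1) with PCI scanning disabled (--no-pci) before opening the TCP socket, which is why the trace moves straight from EAL setup into nvme_tcp_qpair_connect_sock and the controller-init state machine (read vs, read cap, set CC.EN = 1, wait for CSTS.RDY = 1, then the identify and feature commands).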
00:30:18.151 [2024-12-16 22:36:07.772376] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450295 ] 00:30:18.151 [2024-12-16 22:36:07.810215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:18.151 [2024-12-16 22:36:07.810254] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:18.152 [2024-12-16 22:36:07.810259] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:18.152 [2024-12-16 22:36:07.810268] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:18.152 [2024-12-16 22:36:07.810275] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:18.152 [2024-12-16 22:36:07.814340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:18.152 [2024-12-16 22:36:07.814367] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x121dde0 0 00:30:18.152 [2024-12-16 22:36:07.821208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:18.152 [2024-12-16 22:36:07.821222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:18.152 [2024-12-16 22:36:07.821226] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:18.152 [2024-12-16 22:36:07.821229] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:18.152 [2024-12-16 22:36:07.821252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.821257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.821261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.821271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:18.152 [2024-12-16 22:36:07.821287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.828214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.828227] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:18.152 [2024-12-16 22:36:07.828233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:18.152 [2024-12-16 22:36:07.828237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:18.152 [2024-12-16 22:36:07.828247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828254] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.828364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.828371] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:18.152 [2024-12-16 22:36:07.828380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:18.152 [2024-12-16 22:36:07.828387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.828480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.828487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:18.152 [2024-12-16 22:36:07.828494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 
22:36:07.828589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.828597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.828693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.828700] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:18.152 [2024-12-16 22:36:07.828705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828821] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:18.152 [2024-12-16 22:36:07.828826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.828913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.828919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.828921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 
[2024-12-16 22:36:07.828929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:18.152 [2024-12-16 22:36:07.828937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.828943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.828949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.828958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.829018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.152 [2024-12-16 22:36:07.829024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.152 [2024-12-16 22:36:07.829027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.152 [2024-12-16 22:36:07.829034] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:18.152 [2024-12-16 22:36:07.829038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:18.152 [2024-12-16 22:36:07.829044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:18.152 [2024-12-16 22:36:07.829055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:18.152 [2024-12-16 22:36:07.829062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.152 [2024-12-16 22:36:07.829070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.152 [2024-12-16 22:36:07.829080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.152 [2024-12-16 22:36:07.829170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.152 [2024-12-16 22:36:07.829176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.152 [2024-12-16 22:36:07.829179] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829184] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=4096, cccid=0 00:30:18.152 [2024-12-16 22:36:07.829189] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1278f40) on tqpair(0x121dde0): expected_datao=0, payload_size=4096 00:30:18.152 [2024-12-16 22:36:07.829203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829213] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.152 [2024-12-16 22:36:07.829220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.153 [2024-12-16 22:36:07.829225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.153 [2024-12-16 22:36:07.829228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.153 [2024-12-16 22:36:07.829238] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:18.153 [2024-12-16 22:36:07.829242] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:18.153 [2024-12-16 22:36:07.829246] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:18.153 [2024-12-16 22:36:07.829249] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:18.153 [2024-12-16 22:36:07.829253] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:18.153 [2024-12-16 22:36:07.829257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:18.153 [2024-12-16 22:36:07.829298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.153 [2024-12-16 22:36:07.829362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.153 [2024-12-16 22:36:07.829368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.153 [2024-12-16 22:36:07.829371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.153 [2024-12-16 22:36:07.829379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.153 [2024-12-16 22:36:07.829395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 
22:36:07.829402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.153 [2024-12-16 22:36:07.829412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.153 [2024-12-16 22:36:07.829430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.153 [2024-12-16 22:36:07.829445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.153 [2024-12-16 22:36:07.829480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1278f40, cid 0, qid 0 00:30:18.153 [2024-12-16 22:36:07.829484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12790c0, cid 1, qid 0 00:30:18.153 [2024-12-16 22:36:07.829488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279240, cid 2, qid 0 00:30:18.153 [2024-12-16 22:36:07.829492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.153 [2024-12-16 22:36:07.829496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.153 [2024-12-16 22:36:07.829589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.153 [2024-12-16 22:36:07.829594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.153 [2024-12-16 22:36:07.829597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.153 [2024-12-16 22:36:07.829604] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:18.153 [2024-12-16 22:36:07.829609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:18.153 [2024-12-16 22:36:07.829653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.153 [2024-12-16 22:36:07.829717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.153 [2024-12-16 22:36:07.829723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.153 [2024-12-16 22:36:07.829728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.153 [2024-12-16 22:36:07.829780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:18.153 [2024-12-16 22:36:07.829795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.153 [2024-12-16 22:36:07.829804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.153 [2024-12-16 22:36:07.829814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.153 [2024-12-16 22:36:07.829886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.153 [2024-12-16 22:36:07.829891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.153 [2024-12-16 22:36:07.829895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.153 [2024-12-16 22:36:07.829898] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=4096, cccid=4 00:30:18.154 [2024-12-16 22:36:07.829901] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1279540) on tqpair(0x121dde0): expected_datao=0, payload_size=4096 00:30:18.154 [2024-12-16 22:36:07.829905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.154 [2024-12-16 22:36:07.829916] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.154 [2024-12-16 22:36:07.829920] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 
22:36:07.870311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.416 [2024-12-16 22:36:07.870324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.416 [2024-12-16 22:36:07.870327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.416 [2024-12-16 22:36:07.870345] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:18.416 [2024-12-16 22:36:07.870354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:18.416 [2024-12-16 22:36:07.870363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:18.416 [2024-12-16 22:36:07.870370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.416 [2024-12-16 22:36:07.870380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.416 [2024-12-16 22:36:07.870392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.416 [2024-12-16 22:36:07.870475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.416 [2024-12-16 22:36:07.870481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.416 [2024-12-16 22:36:07.870484] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870487] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=4096, cccid=4 00:30:18.416 [2024-12-16 22:36:07.870491] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1279540) on tqpair(0x121dde0): expected_datao=0, payload_size=4096 00:30:18.416 [2024-12-16 22:36:07.870495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870501] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870507] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.416 [2024-12-16 22:36:07.870532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.416 [2024-12-16 22:36:07.870535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.416 [2024-12-16 22:36:07.870539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.416 [2024-12-16 22:36:07.870549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:18.416 [2024-12-16 22:36:07.870558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:18.416 [2024-12-16 22:36:07.870564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.870573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.870584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.417 [2024-12-16 22:36:07.870663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.417 [2024-12-16 22:36:07.870669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.417 [2024-12-16 22:36:07.870672] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=4096, cccid=4 00:30:18.417 [2024-12-16 22:36:07.870679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1279540) on tqpair(0x121dde0): expected_datao=0, payload_size=4096 00:30:18.417 [2024-12-16 22:36:07.870683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870688] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.870709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.870711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.870721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870754] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:18.417 [2024-12-16 22:36:07.870759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:18.417 [2024-12-16 22:36:07.870765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:18.417 [2024-12-16 22:36:07.870776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 
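The state transitions traced above are SPDK's host-side controller initialization running to completion: Fabrics CONNECT, CSTS.RDY polling, IDENTIFY CONTROLLER, AER configuration, keep-alive and queue-count negotiation, namespace identification, and finally "setting state to ready". For orientation, a minimal host-side sketch (hypothetical, not part of this test run) that drives the same sequence against the target printed in the log:

/*
 * Hypothetical sketch only: connect to the NVMe/TCP target that this
 * trace talks to. The transport ID string mirrors the values printed
 * in the log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1).
 */
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "nvmf_connect_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return EXIT_FAILURE;
	}

	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return EXIT_FAILURE;
	}

	/*
	 * spdk_nvme_connect() runs the whole state machine logged above:
	 * Fabrics CONNECT, CSTS.RDY polling, IDENTIFY, AER setup,
	 * keep-alive configuration, queue-count negotiation, namespace
	 * identification.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to cnode1 failed\n");
		return EXIT_FAILURE;
	}

	printf("connected, %u namespace(s)\n", spdk_nvme_ctrlr_get_num_ns(ctrlr));
	spdk_nvme_detach(ctrlr);
	return EXIT_SUCCESS;
}

spdk_nvme_connect() does not return until the state machine reaches "ready", which is why the debug log shows every intermediate state in one burst.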
[2024-12-16 22:36:07.870780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.870786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.870792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.870803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.417 [2024-12-16 22:36:07.870815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.417 [2024-12-16 22:36:07.870820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12796c0, cid 5, qid 0 00:30:18.417 [2024-12-16 22:36:07.870890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.870896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.870898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.870907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.870912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.870915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12796c0) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.870926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.870930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.870935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.870944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12796c0, cid 5, qid 0 00:30:18.417 [2024-12-16 22:36:07.871004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.871010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.871013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.871016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12796c0) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.871024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.871027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.871033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.871041] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12796c0, cid 5, qid 0 00:30:18.417 [2024-12-16 22:36:07.871117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.871122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.871125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.871128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12796c0) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.871137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.871140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.871147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.871157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12796c0, cid 5, qid 0 00:30:18.417 [2024-12-16 22:36:07.875201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.417 [2024-12-16 22:36:07.875208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.417 [2024-12-16 22:36:07.875211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12796c0) on tqpair=0x121dde0 00:30:18.417 [2024-12-16 22:36:07.875228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.875238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.875244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.875252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.875259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.875267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.875274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121dde0) 00:30:18.417 [2024-12-16 22:36:07.875282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.417 [2024-12-16 22:36:07.875294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12796c0, cid 5, qid 0 00:30:18.417 
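The four back-to-back GET LOG PAGE commands just above (cdw10 low byte 01h, 02h, 03h, 05h) are the driver's "set supported log pages" step reading Error Information, SMART / Health Information, Firmware Slot, and Commands Supported and Effects. A hedged sketch of issuing one of them explicitly, assuming a ctrlr obtained as in the previous sketch (read_health_log and log_page_cb are illustrative names, not part of the test):

/*
 * Hypothetical sketch: fetch the SMART / Health Information log page,
 * the command logged above as cdw10:007f0002 with nsid:ffffffff
 * ((0x7f + 1) * 4 = 512 bytes, the size of the health page).
 */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static volatile bool g_log_done;

static void
log_page_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_log_done = true;
}

static int
read_health_log(struct spdk_nvme_ctrlr *ctrlr,
		struct spdk_nvme_health_information_page *health)
{
	int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION,
			SPDK_NVME_GLOBAL_NS_TAG,	/* nsid 0xffffffff, as in the trace */
			health, sizeof(*health), 0,
			log_page_cb, NULL);
	if (rc != 0) {
		return rc;
	}
	/* Completions only surface while the admin qpair is polled. */
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}

The polling loop mirrors what the harness is doing here: the "complete tcp_req" lines above only appear because the admin queue pair is being processed between submissions.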
[2024-12-16 22:36:07.875298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279540, cid 4, qid 0 00:30:18.417 [2024-12-16 22:36:07.875302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1279840, cid 6, qid 0 00:30:18.417 [2024-12-16 22:36:07.875306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12799c0, cid 7, qid 0 00:30:18.417 [2024-12-16 22:36:07.875552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.417 [2024-12-16 22:36:07.875557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.417 [2024-12-16 22:36:07.875560] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875563] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=8192, cccid=5 00:30:18.417 [2024-12-16 22:36:07.875567] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12796c0) on tqpair(0x121dde0): expected_datao=0, payload_size=8192 00:30:18.417 [2024-12-16 22:36:07.875571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875598] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875602] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.417 [2024-12-16 22:36:07.875611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.417 [2024-12-16 22:36:07.875614] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=512, cccid=4 00:30:18.417 [2024-12-16 22:36:07.875623] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1279540) on tqpair(0x121dde0): expected_datao=0, payload_size=512 00:30:18.417 [2024-12-16 22:36:07.875627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875632] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875635] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.417 [2024-12-16 22:36:07.875644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.417 [2024-12-16 22:36:07.875647] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:18.417 [2024-12-16 22:36:07.875650] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=512, cccid=6 00:30:18.417 [2024-12-16 22:36:07.875654] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1279840) on tqpair(0x121dde0): expected_datao=0, payload_size=512 00:30:18.417 [2024-12-16 22:36:07.875658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.418 [2024-12-16 22:36:07.875663] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:18.418 [2024-12-16 22:36:07.875666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:18.418 [2024-12-16 22:36:07.875671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:18.418 [2024-12-16 22:36:07.875675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:18.418 [2024-12-16 22:36:07.875678] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875681] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121dde0): datao=0, datal=4096, cccid=7
00:30:18.418 [2024-12-16 22:36:07.875685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12799c0) on tqpair(0x121dde0): expected_datao=0, payload_size=4096
00:30:18.418 [2024-12-16 22:36:07.875689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875697] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:18.418 [2024-12-16 22:36:07.875709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:18.418 [2024-12-16 22:36:07.875712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12796c0) on tqpair=0x121dde0
00:30:18.418 [2024-12-16 22:36:07.875726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:18.418 [2024-12-16 22:36:07.875731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:18.418 [2024-12-16 22:36:07.875734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279540) on tqpair=0x121dde0
00:30:18.418 [2024-12-16 22:36:07.875748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:18.418 [2024-12-16 22:36:07.875753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:18.418 [2024-12-16 22:36:07.875756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279840) on tqpair=0x121dde0
00:30:18.418 [2024-12-16 22:36:07.875765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:18.418 [2024-12-16 22:36:07.875770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:18.418 [2024-12-16 22:36:07.875773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:18.418 [2024-12-16 22:36:07.875776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12799c0) on tqpair=0x121dde0
00:30:18.418 =====================================================
00:30:18.418 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:18.418 =====================================================
00:30:18.418 Controller Capabilities/Features
00:30:18.418 ================================
00:30:18.418 Vendor ID: 8086
00:30:18.418 Subsystem Vendor ID: 8086
00:30:18.418 Serial Number: SPDK00000000000001
00:30:18.418 Model Number: SPDK bdev Controller
00:30:18.418 Firmware Version: 25.01
00:30:18.418 Recommended Arb Burst: 6
00:30:18.418 IEEE OUI Identifier: e4 d2 5c
00:30:18.418 Multi-path I/O
00:30:18.418 May have multiple subsystem ports: Yes
00:30:18.418 May have multiple controllers: Yes
00:30:18.418 Associated with SR-IOV VF: No
00:30:18.418 Max Data Transfer Size: 131072
00:30:18.418 Max Number of Namespaces: 32
00:30:18.418 Max Number of I/O Queues: 127
00:30:18.418 NVMe Specification Version (VS): 1.3
00:30:18.418 NVMe Specification Version (Identify): 1.3
00:30:18.418 Maximum Queue Entries: 128
00:30:18.418 Contiguous Queues Required: Yes
00:30:18.418 Arbitration Mechanisms Supported
00:30:18.418 Weighted Round Robin: Not Supported
00:30:18.418 Vendor Specific: Not Supported
00:30:18.418 Reset Timeout: 15000 ms
00:30:18.418 Doorbell Stride: 4 bytes
00:30:18.418 NVM Subsystem Reset: Not Supported
00:30:18.418 Command Sets Supported
00:30:18.418 NVM Command Set: Supported
00:30:18.418 Boot Partition: Not Supported
00:30:18.418 Memory Page Size Minimum: 4096 bytes
00:30:18.418 Memory Page Size Maximum: 4096 bytes
00:30:18.418 Persistent Memory Region: Not Supported
00:30:18.418 Optional Asynchronous Events Supported
00:30:18.418 Namespace Attribute Notices: Supported
00:30:18.418 Firmware Activation Notices: Not Supported
00:30:18.418 ANA Change Notices: Not Supported
00:30:18.418 PLE Aggregate Log Change Notices: Not Supported
00:30:18.418 LBA Status Info Alert Notices: Not Supported
00:30:18.418 EGE Aggregate Log Change Notices: Not Supported
00:30:18.418 Normal NVM Subsystem Shutdown event: Not Supported
00:30:18.418 Zone Descriptor Change Notices: Not Supported
00:30:18.418 Discovery Log Change Notices: Not Supported
00:30:18.418 Controller Attributes
00:30:18.418 128-bit Host Identifier: Supported
00:30:18.418 Non-Operational Permissive Mode: Not Supported
00:30:18.418 NVM Sets: Not Supported
00:30:18.418 Read Recovery Levels: Not Supported
00:30:18.418 Endurance Groups: Not Supported
00:30:18.418 Predictable Latency Mode: Not Supported
00:30:18.418 Traffic Based Keep ALive: Not Supported
00:30:18.418 Namespace Granularity: Not Supported
00:30:18.418 SQ Associations: Not Supported
00:30:18.418 UUID List: Not Supported
00:30:18.418 Multi-Domain Subsystem: Not Supported
00:30:18.418 Fixed Capacity Management: Not Supported
00:30:18.418 Variable Capacity Management: Not Supported
00:30:18.418 Delete Endurance Group: Not Supported
00:30:18.418 Delete NVM Set: Not Supported
00:30:18.418 Extended LBA Formats Supported: Not Supported
00:30:18.418 Flexible Data Placement Supported: Not Supported
00:30:18.418
00:30:18.418 Controller Memory Buffer Support
00:30:18.418 ================================
00:30:18.418 Supported: No
00:30:18.418
00:30:18.418 Persistent Memory Region Support
00:30:18.418 ================================
00:30:18.418 Supported: No
00:30:18.418
00:30:18.418 Admin Command Set Attributes
00:30:18.418 ============================
00:30:18.418 Security Send/Receive: Not Supported
00:30:18.418 Format NVM: Not Supported
00:30:18.418 Firmware Activate/Download: Not Supported
00:30:18.418 Namespace Management: Not Supported
00:30:18.418 Device Self-Test: Not Supported
00:30:18.418 Directives: Not Supported
00:30:18.418 NVMe-MI: Not Supported
00:30:18.418 Virtualization Management: Not Supported
00:30:18.418 Doorbell Buffer Config: Not Supported
00:30:18.418 Get LBA Status Capability: Not Supported
00:30:18.418 Command & Feature Lockdown Capability: Not Supported
00:30:18.418 Abort Command Limit: 4
00:30:18.418 Async Event Request Limit: 4
00:30:18.418 Number of Firmware Slots: N/A
00:30:18.418 Firmware Slot 1 Read-Only: N/A
00:30:18.418 Firmware Activation Without Reset: N/A
00:30:18.418 Multiple Update Detection Support: N/A
00:30:18.418 Firmware Update Granularity: No Information Provided
00:30:18.418 Per-Namespace SMART Log: No
00:30:18.418 Asymmetric Namespace Access Log Page: Not Supported
00:30:18.418 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:30:18.418 Command Effects Log Page: Supported
00:30:18.418 Get Log Page Extended Data: Supported
00:30:18.418 Telemetry Log Pages: Not Supported
00:30:18.418 Persistent Event Log Pages: Not Supported
00:30:18.418 Supported Log Pages Log Page: May Support
00:30:18.418 Commands Supported & Effects Log Page: Not Supported
00:30:18.418 Feature Identifiers & Effects Log Page:May Support
00:30:18.418 NVMe-MI Commands & Effects Log Page: May Support
00:30:18.418 Data Area 4 for Telemetry Log: Not Supported
00:30:18.418 Error Log Page Entries Supported: 128
00:30:18.418 Keep Alive: Supported
00:30:18.418 Keep Alive Granularity: 10000 ms
00:30:18.418
00:30:18.418 NVM Command Set Attributes
00:30:18.418 ==========================
00:30:18.418 Submission Queue Entry Size
00:30:18.418 Max: 64
00:30:18.418 Min: 64
00:30:18.418 Completion Queue Entry Size
00:30:18.418 Max: 16
00:30:18.418 Min: 16
00:30:18.418 Number of Namespaces: 32
00:30:18.418 Compare Command: Supported
00:30:18.418 Write Uncorrectable Command: Not Supported
00:30:18.418 Dataset Management Command: Supported
00:30:18.418 Write Zeroes Command: Supported
00:30:18.418 Set Features Save Field: Not Supported
00:30:18.418 Reservations: Supported
00:30:18.418 Timestamp: Not Supported
00:30:18.418 Copy: Supported
00:30:18.418 Volatile Write Cache: Present
00:30:18.418 Atomic Write Unit (Normal): 1
00:30:18.418 Atomic Write Unit (PFail): 1
00:30:18.418 Atomic Compare & Write Unit: 1
00:30:18.418 Fused Compare & Write: Supported
00:30:18.418 Scatter-Gather List
00:30:18.418 SGL Command Set: Supported
00:30:18.418 SGL Keyed: Supported
00:30:18.418 SGL Bit Bucket Descriptor: Not Supported
00:30:18.418 SGL Metadata Pointer: Not Supported
00:30:18.418 Oversized SGL: Not Supported
00:30:18.418 SGL Metadata Address: Not Supported
00:30:18.418 SGL Offset: Supported
00:30:18.418 Transport SGL Data Block: Not Supported
00:30:18.418 Replay Protected Memory Block: Not Supported
00:30:18.418
00:30:18.418 Firmware Slot Information
00:30:18.418 =========================
00:30:18.418 Active slot: 1
00:30:18.418 Slot 1 Firmware Revision: 25.01
00:30:18.418
00:30:18.418
00:30:18.418 Commands Supported and Effects
00:30:18.418 ==============================
00:30:18.418 Admin Commands
00:30:18.418 --------------
00:30:18.418 Get Log Page (02h): Supported
00:30:18.418 Identify (06h): Supported
00:30:18.418 Abort (08h): Supported
00:30:18.418 Set Features (09h): Supported
00:30:18.418 Get Features (0Ah): Supported
00:30:18.418 Asynchronous Event Request (0Ch): Supported
00:30:18.418 Keep Alive (18h): Supported
00:30:18.418 I/O Commands
00:30:18.418 ------------
00:30:18.419 Flush (00h): Supported LBA-Change
00:30:18.419 Write (01h): Supported LBA-Change
00:30:18.419 Read (02h): Supported
00:30:18.419 Compare (05h): Supported
00:30:18.419 Write Zeroes (08h): Supported LBA-Change
00:30:18.419 Dataset Management (09h): Supported LBA-Change
00:30:18.419 Copy (19h): Supported LBA-Change
00:30:18.419
00:30:18.419 Error Log
00:30:18.419 =========
00:30:18.419
00:30:18.419 Arbitration
00:30:18.419 ===========
00:30:18.419 Arbitration Burst: 1
00:30:18.419
00:30:18.419 Power Management
00:30:18.419 ================
00:30:18.419 Number of Power States: 1
00:30:18.419 Current Power State: Power State #0
00:30:18.419 Power State #0:
00:30:18.419 Max Power: 0.00 W
00:30:18.419 Non-Operational State: Operational
00:30:18.419 Entry Latency: Not Reported
00:30:18.419 Exit Latency: Not Reported
00:30:18.419 Relative Read Throughput: 0
00:30:18.419 Relative Read Latency: 0
00:30:18.419 Relative Write Throughput: 0
00:30:18.419 Relative Write Latency: 0
00:30:18.419 Idle Power: Not Reported 00:30:18.419 Active Power: Not Reported 00:30:18.419 Non-Operational Permissive Mode: Not Supported 00:30:18.419 00:30:18.419 Health Information 00:30:18.419 ================== 00:30:18.419 Critical Warnings: 00:30:18.419 Available Spare Space: OK 00:30:18.419 Temperature: OK 00:30:18.419 Device Reliability: OK 00:30:18.419 Read Only: No 00:30:18.419 Volatile Memory Backup: OK 00:30:18.419 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:18.419 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:18.419 Available Spare: 0% 00:30:18.419 Available Spare Threshold: 0% 00:30:18.419 Life Percentage Used:[2024-12-16 22:36:07.875854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.875859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.875866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.875876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12799c0, cid 7, qid 0 00:30:18.419 [2024-12-16 22:36:07.875949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.875955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.875957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.875961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12799c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.875987] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:18.419 [2024-12-16 22:36:07.875995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1278f40) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.419 [2024-12-16 22:36:07.876005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12790c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.419 [2024-12-16 22:36:07.876014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1279240) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.419 [2024-12-16 22:36:07.876022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.419 [2024-12-16 22:36:07.876032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876250] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:18.419 [2024-12-16 22:36:07.876256] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:18.419 [2024-12-16 22:36:07.876264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876378] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876684] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.419 [2024-12-16 22:36:07.876751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.419 [2024-12-16 22:36:07.876756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.419 [2024-12-16 22:36:07.876759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.419 [2024-12-16 22:36:07.876771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.419 [2024-12-16 22:36:07.876777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.419 [2024-12-16 22:36:07.876783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.419 [2024-12-16 22:36:07.876793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.876850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.876856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.876859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.876870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.876882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.876891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.876950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.876956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.876958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.876970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.876976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.876982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.876991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 
22:36:07.877055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 
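From "Prepare to destruct SSD" onward, each repeated FABRIC PROPERTY GET qid:0 cid:3 above is the driver reading CSTS over the admin queue while it waits for the CC-initiated shutdown to finish (RTD3E = 0, so the default 10000 ms shutdown timeout from the log applies). A hedged sketch of the asynchronous detach path that produces this polling; shutdown_ctrlr is an illustrative name, not part of the test:

/*
 * Hypothetical teardown sketch: asynchronous detach, polled until the
 * controller reports shutdown complete.
 */
#include <errno.h>
#include "spdk/nvme.h"

static void
shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *dctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &dctx) != 0) {
		return;
	}
	/*
	 * Each poll advances the shutdown state machine: write CC.SHN,
	 * then read CSTS until SHST indicates shutdown complete (or the
	 * shutdown timeout noted in the log expires).
	 */
	while (spdk_nvme_detach_poll_async(dctx) == -EAGAIN) {
		/* other work may run between polls */
	}
}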
[2024-12-16 22:36:07.877378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.877664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.420 [2024-12-16 22:36:07.877669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.420 [2024-12-16 22:36:07.877672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.420 [2024-12-16 22:36:07.877683] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.420 [2024-12-16 22:36:07.877690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.420 [2024-12-16 22:36:07.877695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.420 [2024-12-16 22:36:07.877705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.420 [2024-12-16 22:36:07.883199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.422 [2024-12-16 22:36:07.883207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.422 [2024-12-16 22:36:07.883210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.422 [2024-12-16 22:36:07.883213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.422 [2024-12-16 22:36:07.883223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:18.422 [2024-12-16 22:36:07.883226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:18.422 [2024-12-16 22:36:07.883230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121dde0) 00:30:18.422 [2024-12-16 22:36:07.883235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.422 [2024-12-16 22:36:07.883246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12793c0, cid 3, qid 0 00:30:18.422 [2024-12-16 22:36:07.883394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:18.422 [2024-12-16 22:36:07.883400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:18.422 [2024-12-16 22:36:07.883403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:18.422 [2024-12-16 22:36:07.883406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete
tcp_req(0x12793c0) on tqpair=0x121dde0 00:30:18.422 [2024-12-16 22:36:07.883412] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:30:18.422 0% 00:30:18.422 Data Units Read: 0 00:30:18.422 Data Units Written: 0 00:30:18.422 Host Read Commands: 0 00:30:18.422 Host Write Commands: 0 00:30:18.422 Controller Busy Time: 0 minutes 00:30:18.422 Power Cycles: 0 00:30:18.422 Power On Hours: 0 hours 00:30:18.422 Unsafe Shutdowns: 0 00:30:18.422 Unrecoverable Media Errors: 0 00:30:18.422 Lifetime Error Log Entries: 0 00:30:18.422 Warning Temperature Time: 0 minutes 00:30:18.422 Critical Temperature Time: 0 minutes 00:30:18.422 00:30:18.422 Number of Queues 00:30:18.422 ================ 00:30:18.422 Number of I/O Submission Queues: 127 00:30:18.422 Number of I/O Completion Queues: 127 00:30:18.422 00:30:18.422 Active Namespaces 00:30:18.422 ================= 00:30:18.422 Namespace ID:1 00:30:18.422 Error Recovery Timeout: Unlimited 00:30:18.422 Command Set Identifier: NVM (00h) 00:30:18.422 Deallocate: Supported 00:30:18.422 Deallocated/Unwritten Error: Not Supported 00:30:18.422 Deallocated Read Value: Unknown 00:30:18.422 Deallocate in Write Zeroes: Not Supported 00:30:18.422 Deallocated Guard Field: 0xFFFF 00:30:18.422 Flush: Supported 00:30:18.422 Reservation: Supported 00:30:18.422 Namespace Sharing Capabilities: Multiple Controllers 00:30:18.422 Size (in LBAs): 131072 (0GiB) 00:30:18.422 Capacity (in LBAs): 131072 (0GiB) 00:30:18.422 Utilization (in LBAs): 131072 (0GiB) 00:30:18.422 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:18.422 EUI64: ABCDEF0123456789 00:30:18.422 UUID: a255b000-8952-4e51-8ce6-684a249d4bcc 00:30:18.422 Thin Provisioning: Not Supported 00:30:18.422 Per-NS Atomic Units: Yes 00:30:18.422 Atomic Boundary Size (Normal): 0 00:30:18.422 Atomic Boundary Size (PFail): 0 00:30:18.422 Atomic Boundary Offset: 0 00:30:18.422 Maximum Single Source Range Length: 65535 00:30:18.422 Maximum Copy Length: 65535 00:30:18.422 Maximum Source Range Count: 1 00:30:18.422 NGUID/EUI64 Never Reused: No 00:30:18.422 Namespace Write Protected: No 00:30:18.422 Number of LBA Formats: 1 00:30:18.422 Current LBA Format: LBA Format #00 00:30:18.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:18.422 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.422 22:36:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.422 rmmod nvme_tcp 00:30:18.422 rmmod nvme_fabrics 00:30:18.422 rmmod nvme_keyring 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 450031 ']' 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 450031 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 450031 ']' 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 450031 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.422 22:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 450031 00:30:18.422 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.422 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.422 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 450031' 00:30:18.422 killing process with pid 450031 00:30:18.422 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 450031 00:30:18.422 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 450031 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.681 22:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.585 22:36:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:20.585 00:30:20.585 real 0m9.237s 00:30:20.585 user 0m5.448s 00:30:20.585 sys 0m4.822s 00:30:20.585 22:36:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.585 22:36:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.585 
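The cleanup traced above is the generic nvmftestfini path every nvmf host test runs. Condensed, it amounts to roughly the following hedged sketch; the command forms are copied from the trace, but the pid 450031, subsystem nqn, and interface cvl_0_1 are specific to this run:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    kill 450031                                               # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                                   # unload the initiator transport module
    modprobe -v -r nvme-fabrics                               # and the fabrics core it pulled in
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # remove the SPDK-tagged ACCEPT rules
    ip -4 addr flush cvl_0_1                                  # clear the initiator-side test address
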
************************************ 00:30:20.585 END TEST nvmf_identify 00:30:20.585 ************************************ 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.844 ************************************ 00:30:20.844 START TEST nvmf_perf 00:30:20.844 ************************************ 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:20.844 * Looking for test storage... 00:30:20.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.844 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:20.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.845 --rc genhtml_branch_coverage=1 00:30:20.845 --rc genhtml_function_coverage=1 00:30:20.845 --rc genhtml_legend=1 00:30:20.845 --rc geninfo_all_blocks=1 00:30:20.845 --rc geninfo_unexecuted_blocks=1 00:30:20.845 00:30:20.845 ' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:20.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.845 --rc genhtml_branch_coverage=1 00:30:20.845 --rc genhtml_function_coverage=1 00:30:20.845 --rc genhtml_legend=1 00:30:20.845 --rc geninfo_all_blocks=1 00:30:20.845 --rc geninfo_unexecuted_blocks=1 00:30:20.845 00:30:20.845 ' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:20.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.845 --rc genhtml_branch_coverage=1 00:30:20.845 --rc genhtml_function_coverage=1 00:30:20.845 --rc genhtml_legend=1 00:30:20.845 --rc geninfo_all_blocks=1 00:30:20.845 --rc geninfo_unexecuted_blocks=1 00:30:20.845 00:30:20.845 ' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:20.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.845 --rc genhtml_branch_coverage=1 00:30:20.845 --rc genhtml_function_coverage=1 00:30:20.845 --rc genhtml_legend=1 00:30:20.845 --rc geninfo_all_blocks=1 00:30:20.845 --rc geninfo_unexecuted_blocks=1 00:30:20.845 00:30:20.845 ' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:20.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.845 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.104 22:36:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.104 22:36:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:27.673 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:27.673 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:27.673 Found net devices under 0000:af:00.0: cvl_0_0 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:27.673 22:36:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:27.673 Found net devices under 0000:af:00.1: cvl_0_1 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:27.673 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.674 22:36:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:27.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:30:27.674 00:30:27.674 --- 10.0.0.2 ping statistics --- 00:30:27.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.674 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:27.674 00:30:27.674 --- 10.0.0.1 ping statistics --- 00:30:27.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.674 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=453814 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 453814 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 453814 ']' 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:27.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.674 [2024-12-16 22:36:16.474900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:27.674 [2024-12-16 22:36:16.474949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.674 [2024-12-16 22:36:16.551358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.674 [2024-12-16 22:36:16.574856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.674 [2024-12-16 22:36:16.574893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.674 [2024-12-16 22:36:16.574900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.674 [2024-12-16 22:36:16.574905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.674 [2024-12-16 22:36:16.574910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.674 [2024-12-16 22:36:16.576262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.674 [2024-12-16 22:36:16.576369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.674 [2024-12-16 22:36:16.576478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.674 [2024-12-16 22:36:16.576479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:27.674 22:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:30.207 22:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:30.207 22:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:30.466 22:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:30.466 22:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:30.466 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
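Taken together, the namespace plumbing set up earlier and the RPC calls traced just below (transport, subsystem, namespaces, listener) reduce the whole target bring-up to a short recipe. A minimal sketch follows, written as if run from the root of an SPDK build tree (an assumption; this run uses absolute Jenkins workspace paths), with addresses, interface names, and the cnode1 nqn copied from this run and readiness handling simplified (the harness instead polls /var/tmp/spdk.sock via waitforlisten):

    # Target side lives in its own network namespace, initiator on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                   # sanity-check the loopback topology
    # Start the target in the namespace, then configure it over RPC.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py bdev_malloc_create 64 512           # returns bdev name "Malloc0"
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then e.g. the first fabric perf run used below:
    ./build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
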
00:30:30.466 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:30.466 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:30.466 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:30.466 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:30.724 [2024-12-16 22:36:20.346409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.724 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:30.983 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:30.983 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.241 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:31.241 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:31.500 22:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.500 [2024-12-16 22:36:21.154562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.500 22:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:31.759 22:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:31.759 22:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:31.759 22:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:31.759 22:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:33.136 Initializing NVMe Controllers 00:30:33.136 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:33.136 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:33.136 Initialization complete. Launching workers. 
00:30:33.136 ======================================================== 00:30:33.136 Latency(us) 00:30:33.136 Device Information : IOPS MiB/s Average min max 00:30:33.136 PCIE (0000:5e:00.0) NSID 1 from core 0: 98064.17 383.06 325.78 34.07 4505.72 00:30:33.136 ======================================================== 00:30:33.136 Total : 98064.17 383.06 325.78 34.07 4505.72 00:30:33.136 00:30:33.136 22:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.513 Initializing NVMe Controllers 00:30:34.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:34.513 Initialization complete. Launching workers. 00:30:34.513 ======================================================== 00:30:34.513 Latency(us) 00:30:34.513 Device Information : IOPS MiB/s Average min max 00:30:34.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 115.00 0.45 8964.48 103.64 45123.83 00:30:34.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36.00 0.14 27882.26 7961.12 47904.51 00:30:34.513 ======================================================== 00:30:34.513 Total : 151.00 0.59 13474.68 103.64 47904.51 00:30:34.513 00:30:34.513 22:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.449 Initializing NVMe Controllers 00:30:35.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.449 Initialization complete. Launching workers. 00:30:35.449 ======================================================== 00:30:35.449 Latency(us) 00:30:35.449 Device Information : IOPS MiB/s Average min max 00:30:35.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11352.48 44.35 2825.13 500.07 42104.39 00:30:35.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3849.58 15.04 8380.74 5381.56 47807.24 00:30:35.449 ======================================================== 00:30:35.449 Total : 15202.07 59.38 4231.96 500.07 47807.24 00:30:35.449 00:30:35.449 22:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:35.449 22:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:35.449 22:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.982 Initializing NVMe Controllers 00:30:37.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.982 Controller IO queue size 128, less than required. 00:30:37.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:37.982 Controller IO queue size 128, less than required. 00:30:37.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:37.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:37.982 Initialization complete. Launching workers. 00:30:37.982 ======================================================== 00:30:37.982 Latency(us) 00:30:37.982 Device Information : IOPS MiB/s Average min max 00:30:37.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1848.49 462.12 70349.03 47994.39 122767.28 00:30:37.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 595.85 148.96 221589.28 79972.06 325650.02 00:30:37.982 ======================================================== 00:30:37.982 Total : 2444.34 611.08 107216.54 47994.39 325650.02 00:30:37.982 00:30:37.982 22:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:38.240 No valid NVMe controllers or AIO or URING devices found 00:30:38.240 Initializing NVMe Controllers 00:30:38.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.240 Controller IO queue size 128, less than required. 00:30:38.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:38.240 Controller IO queue size 128, less than required. 00:30:38.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:38.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:38.240 WARNING: Some requested NVMe devices were skipped 00:30:38.240 22:36:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:40.775 Initializing NVMe Controllers 00:30:40.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.775 Controller IO queue size 128, less than required. 00:30:40.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:40.775 Controller IO queue size 128, less than required. 00:30:40.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:40.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:40.775 Initialization complete. Launching workers. 
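The empty run above ("No valid NVMe controllers or AIO or URING devices found") is deliberate alignment probing, not a failure: 36964 bytes is not a multiple of the 512-byte sector size, so perf drops each namespace and is left with nothing to test. The arithmetic:

  $ echo $(( 36964 % 512 ))
  100

The --transport-stat run launching above adds per-lcore TCP poll counters to the usual latency table. Reading the counter names at face value, idle_polls/polls is the fraction of transport polls that completed no work; the stats below give 9593/12891 ≈ 74% for NSID 1 and 8906/12958 ≈ 69% for NSID 2.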
00:30:40.775 00:30:40.775 ==================== 00:30:40.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:40.775 TCP transport: 00:30:40.775 polls: 12891 00:30:40.775 idle_polls: 9593 00:30:40.775 sock_completions: 3298 00:30:40.775 nvme_completions: 6233 00:30:40.775 submitted_requests: 9428 00:30:40.775 queued_requests: 1 00:30:40.775 00:30:40.775 ==================== 00:30:40.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:40.775 TCP transport: 00:30:40.775 polls: 12958 00:30:40.775 idle_polls: 8906 00:30:40.775 sock_completions: 4052 00:30:40.775 nvme_completions: 6803 00:30:40.775 submitted_requests: 10252 00:30:40.775 queued_requests: 1 00:30:40.775 ======================================================== 00:30:40.775 Latency(us) 00:30:40.775 Device Information : IOPS MiB/s Average min max 00:30:40.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1555.40 388.85 84474.73 67104.26 138483.65 00:30:40.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1697.66 424.42 75439.33 41253.40 111049.65 00:30:40.775 ======================================================== 00:30:40.775 Total : 3253.06 813.27 79759.47 41253.40 138483.65 00:30:40.775 00:30:40.775 22:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:40.775 22:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.034 22:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:41.034 22:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:41.034 22:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=422e3c28-49bd-47dc-a2ac-25a0b5b2341e 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 422e3c28-49bd-47dc-a2ac-25a0b5b2341e 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=422e3c28-49bd-47dc-a2ac-25a0b5b2341e 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:44.321 22:36:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:44.579 { 00:30:44.579 "uuid": "422e3c28-49bd-47dc-a2ac-25a0b5b2341e", 00:30:44.579 "name": "lvs_0", 00:30:44.579 "base_bdev": "Nvme0n1", 00:30:44.579 "total_data_clusters": 238234, 00:30:44.579 "free_clusters": 238234, 00:30:44.579 "block_size": 512, 00:30:44.579 "cluster_size": 4194304 00:30:44.579 } 00:30:44.579 ]' 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="422e3c28-49bd-47dc-a2ac-25a0b5b2341e") .free_clusters' 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:44.579 22:36:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="422e3c28-49bd-47dc-a2ac-25a0b5b2341e") .cluster_size' 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:30:44.579 952936 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:44.579 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 422e3c28-49bd-47dc-a2ac-25a0b5b2341e lbd_0 20480 00:30:44.837 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a386ceb4-2c43-4352-b5fa-549d28415a5e 00:30:44.837 22:36:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a386ceb4-2c43-4352-b5fa-549d28415a5e lvs_n_0 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=ab21a752-3d32-41ba-9188-41f0dd5b844e 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb ab21a752-3d32-41ba-9188-41f0dd5b844e 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=ab21a752-3d32-41ba-9188-41f0dd5b844e 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:45.773 { 00:30:45.773 "uuid": "422e3c28-49bd-47dc-a2ac-25a0b5b2341e", 00:30:45.773 "name": "lvs_0", 00:30:45.773 "base_bdev": "Nvme0n1", 00:30:45.773 "total_data_clusters": 238234, 00:30:45.773 "free_clusters": 233114, 00:30:45.773 "block_size": 512, 00:30:45.773 "cluster_size": 4194304 00:30:45.773 }, 00:30:45.773 { 00:30:45.773 "uuid": "ab21a752-3d32-41ba-9188-41f0dd5b844e", 00:30:45.773 "name": "lvs_n_0", 00:30:45.773 "base_bdev": "a386ceb4-2c43-4352-b5fa-549d28415a5e", 00:30:45.773 "total_data_clusters": 5114, 00:30:45.773 "free_clusters": 5114, 00:30:45.773 "block_size": 512, 00:30:45.773 "cluster_size": 4194304 00:30:45.773 } 00:30:45.773 ]' 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ab21a752-3d32-41ba-9188-41f0dd5b844e") .free_clusters' 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ab21a752-3d32-41ba-9188-41f0dd5b844e") .cluster_size' 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:45.773 20456 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:45.773 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab21a752-3d32-41ba-9188-41f0dd5b844e lbd_nest_0 20456 00:30:46.031 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=075c81bf-dc0f-4252-9f38-b01971b2f917 00:30:46.031 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.290 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:46.290 22:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 075c81bf-dc0f-4252-9f38-b01971b2f917 00:30:46.549 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.807 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:46.807 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:46.807 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:46.807 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.807 22:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.010 Initializing NVMe Controllers 00:30:59.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.010 Initialization complete. Launching workers. 00:30:59.010 ======================================================== 00:30:59.010 Latency(us) 00:30:59.010 Device Information : IOPS MiB/s Average min max 00:30:59.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.78 0.02 22383.76 125.14 45708.55 00:30:59.010 ======================================================== 00:30:59.010 Total : 44.78 0.02 22383.76 125.14 45708.55 00:30:59.010 00:30:59.010 22:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:59.010 22:36:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.988 Initializing NVMe Controllers 00:31:08.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.988 Initialization complete. Launching workers. 
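get_lvs_free_mb above is plain cluster accounting: free MB = free_clusters * cluster_size / 2^20. For lvs_0 that is 238234 * 4194304 / 1048576 = 238234 * 4 = 952936 MiB, capped at 20480 MiB before lbd_0 is carved out; for the nested lvs_n_0 it is 5114 * 4 = 20456 MiB, below the cap, which is why lbd_nest_0 is created at 20456. The core of the helper, as a sketch of what the jq queries feed (not the verbatim function):

  free_mb=$(( fc * cs / 1024 / 1024 ))   # fc = free_clusters, cs = cluster_size in bytes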
00:31:08.988 ======================================================== 00:31:08.988 Latency(us) 00:31:08.988 Device Information : IOPS MiB/s Average min max 00:31:08.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.88 8.48 14743.78 4985.21 55865.89 00:31:08.988 ======================================================== 00:31:08.988 Total : 67.88 8.48 14743.78 4985.21 55865.89 00:31:08.988 00:31:08.988 22:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:08.988 22:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.988 22:36:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.964 Initializing NVMe Controllers 00:31:18.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.964 Initialization complete. Launching workers. 00:31:18.964 ======================================================== 00:31:18.964 Latency(us) 00:31:18.964 Device Information : IOPS MiB/s Average min max 00:31:18.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8589.90 4.19 3726.72 227.76 9990.36 00:31:18.964 ======================================================== 00:31:18.964 Total : 8589.90 4.19 3726.72 227.76 9990.36 00:31:18.964 00:31:18.964 22:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:18.964 22:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.949 Initializing NVMe Controllers 00:31:28.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.949 Initialization complete. Launching workers. 00:31:28.949 ======================================================== 00:31:28.949 Latency(us) 00:31:28.949 Device Information : IOPS MiB/s Average min max 00:31:28.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4407.55 550.94 7262.01 662.22 18959.94 00:31:28.949 ======================================================== 00:31:28.949 Total : 4407.55 550.94 7262.01 662.22 18959.94 00:31:28.949 00:31:28.949 22:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:28.949 22:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:28.949 22:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.923 Initializing NVMe Controllers 00:31:38.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.923 Controller IO queue size 128, less than required. 00:31:38.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
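The four runs above and the two q=128 runs that follow are the cells of the sweep declared earlier (qd_depth 1/32/128 crossed with io_size 512/131072). Unrolled, the driver loop is simply:

  for qd in 1 32 128; do
      for o in 512 131072; do
          ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
  done

The tables track the sweep as expected: at 512 B, IOPS rises with queue depth, while the 128 KiB cells trade IOPS for bandwidth at much higher per-I/O latency.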
00:31:38.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.923 Initialization complete. Launching workers. 00:31:38.923 ======================================================== 00:31:38.923 Latency(us) 00:31:38.923 Device Information : IOPS MiB/s Average min max 00:31:38.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15891.24 7.76 8057.08 1454.61 22621.62 00:31:38.923 ======================================================== 00:31:38.923 Total : 15891.24 7.76 8057.08 1454.61 22621.62 00:31:38.923 00:31:38.923 22:37:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:38.923 22:37:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.896 Initializing NVMe Controllers 00:31:48.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.896 Controller IO queue size 128, less than required. 00:31:48.896 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:48.896 Initialization complete. Launching workers. 00:31:48.896 ======================================================== 00:31:48.896 Latency(us) 00:31:48.896 Device Information : IOPS MiB/s Average min max 00:31:48.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.60 150.07 106879.78 8540.50 215534.89 00:31:48.896 ======================================================== 00:31:48.896 Total : 1200.60 150.07 106879.78 8540.50 215534.89 00:31:48.896 00:31:48.896 22:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.155 22:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 075c81bf-dc0f-4252-9f38-b01971b2f917 00:31:50.090 22:37:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:50.090 22:37:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a386ceb4-2c43-4352-b5fa-549d28415a5e 00:31:50.348 22:37:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.607 rmmod nvme_tcp 
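Teardown above unwinds in reverse order of creation — subsystem first (presumably so nothing still references the lvol bdevs), then the nested volume and its store, then the base volume and lvs_0:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_lvol_delete 075c81bf-dc0f-4252-9f38-b01971b2f917   # lbd_nest_0
  scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  scripts/rpc.py bdev_lvol_delete a386ceb4-2c43-4352-b5fa-549d28415a5e   # lbd_0
  scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0

The rmmod lines here and just below are nvmftestfini unloading the kernel NVMe initiator modules the test had probed.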
00:31:50.607 rmmod nvme_fabrics 00:31:50.607 rmmod nvme_keyring 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 453814 ']' 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 453814 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 453814 ']' 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 453814 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453814 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453814' 00:31:50.607 killing process with pid 453814 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 453814 00:31:50.607 22:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 453814 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.510 22:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.415 00:31:54.415 real 1m33.460s 00:31:54.415 user 5m33.111s 00:31:54.415 sys 0m17.271s 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.415 ************************************ 00:31:54.415 END TEST nvmf_perf 00:31:54.415 ************************************ 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.415 ************************************ 00:31:54.415 START TEST nvmf_fio_host 00:31:54.415 ************************************ 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:54.415 * Looking for test storage... 00:31:54.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:54.415 22:37:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:54.415 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.416 --rc genhtml_branch_coverage=1 00:31:54.416 --rc genhtml_function_coverage=1 00:31:54.416 --rc genhtml_legend=1 00:31:54.416 --rc geninfo_all_blocks=1 00:31:54.416 --rc geninfo_unexecuted_blocks=1 00:31:54.416 00:31:54.416 ' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.416 --rc genhtml_branch_coverage=1 00:31:54.416 --rc genhtml_function_coverage=1 00:31:54.416 --rc genhtml_legend=1 00:31:54.416 --rc geninfo_all_blocks=1 00:31:54.416 --rc geninfo_unexecuted_blocks=1 00:31:54.416 00:31:54.416 ' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.416 --rc genhtml_branch_coverage=1 00:31:54.416 --rc genhtml_function_coverage=1 00:31:54.416 --rc genhtml_legend=1 00:31:54.416 --rc geninfo_all_blocks=1 00:31:54.416 --rc geninfo_unexecuted_blocks=1 00:31:54.416 00:31:54.416 ' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:54.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.416 --rc genhtml_branch_coverage=1 00:31:54.416 --rc genhtml_function_coverage=1 00:31:54.416 --rc genhtml_legend=1 00:31:54.416 --rc geninfo_all_blocks=1 00:31:54.416 --rc geninfo_unexecuted_blocks=1 00:31:54.416 00:31:54.416 ' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.416 22:37:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:54.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:54.416 
22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:54.416 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.417 22:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:00.986 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:00.986 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:00.986 Found net devices under 0000:af:00.0: cvl_0_0 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:00.986 Found net devices under 0000:af:00.1: cvl_0_1 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.986 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.987 22:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:32:00.987 00:32:00.987 --- 10.0.0.2 ping statistics --- 00:32:00.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.987 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
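Interface discovery and the namespace rig above are worth unpacking, since the rest of this file reuses them. Discovery is a sysfs glob per whitelisted PCI function; the rig then splits the two E810 ports into initiator and target roles. Reduced to its core (the iptables comment wrapper omitted):

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep interface names
      net_devs+=("${pci_net_devs[@]}")
  done

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The 10.0.0.1 ping completing just below is the reverse check, run from inside the namespace.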
00:32:00.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:32:00.987 00:32:00.987 --- 10.0.0.1 ping statistics --- 00:32:00.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.987 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=470642 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 470642 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 470642 ']' 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 [2024-12-16 22:37:50.119068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
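From here the fio host test repeats the whole pattern end to end, all of it traced below: nvmf_tgt is started inside the namespace, a RAM-backed target is provisioned over RPC, and fio drives it through SPDK's NVMe ioengine rather than the kernel initiator. Condensed into one sketch (paths relative to the SPDK repo; the backgrounding of nvmf_tgt and the /usr/src/fio location are this rig's conventions):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The 4 KiB randrw results below land around 11.8k read IOPS with ~6 ms mean completion latency against the malloc bdev.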
00:32:00.987 [2024-12-16 22:37:50.119117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.987 [2024-12-16 22:37:50.196177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.987 [2024-12-16 22:37:50.219302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.987 [2024-12-16 22:37:50.219342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.987 [2024-12-16 22:37:50.219349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.987 [2024-12-16 22:37:50.219355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.987 [2024-12-16 22:37:50.219364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.987 [2024-12-16 22:37:50.220666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.987 [2024-12-16 22:37:50.220773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.987 [2024-12-16 22:37:50.220880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.987 [2024-12-16 22:37:50.220882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.987 [2024-12-16 22:37:50.472856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.987 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:01.246 Malloc1 00:32:01.246 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:01.504 22:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:01.504 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.762 [2024-12-16 22:37:51.327734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.762 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:02.021 22:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:02.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:02.279 fio-3.35 00:32:02.279 Starting 1 thread 00:32:04.812 00:32:04.812 test: (groupid=0, jobs=1): 
err= 0: pid=471083: Mon Dec 16 22:37:54 2024 00:32:04.812 read: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.7MiB/2005msec) 00:32:04.812 slat (nsec): min=1525, max=238750, avg=1684.59, stdev=2216.18 00:32:04.812 clat (usec): min=3083, max=10192, avg=5967.61, stdev=446.54 00:32:04.812 lat (usec): min=3121, max=10193, avg=5969.30, stdev=446.43 00:32:04.812 clat percentiles (usec): 00:32:04.812 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:32:04.812 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:32:04.812 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:32:04.812 | 99.00th=[ 6980], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[ 9372], 00:32:04.812 | 99.99th=[10159] 00:32:04.812 bw ( KiB/s): min=46144, max=47992, per=99.95%, avg=47342.00, stdev=846.02, samples=4 00:32:04.812 iops : min=11536, max=11998, avg=11835.50, stdev=211.50, samples=4 00:32:04.812 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(92.3MiB/2005msec); 0 zone resets 00:32:04.812 slat (nsec): min=1571, max=227396, avg=1750.09, stdev=1637.80 00:32:04.812 clat (usec): min=2426, max=9203, avg=4813.09, stdev=365.05 00:32:04.812 lat (usec): min=2441, max=9205, avg=4814.84, stdev=365.04 00:32:04.812 clat percentiles (usec): 00:32:04.812 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:32:04.812 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:32:04.812 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:32:04.812 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7504], 99.95th=[ 8225], 00:32:04.812 | 99.99th=[ 8848] 00:32:04.812 bw ( KiB/s): min=46656, max=47776, per=100.00%, avg=47152.00, stdev=484.95, samples=4 00:32:04.812 iops : min=11664, max=11944, avg=11788.00, stdev=121.24, samples=4 00:32:04.812 lat (msec) : 4=0.58%, 10=99.40%, 20=0.01% 00:32:04.812 cpu : usr=72.90%, sys=26.10%, ctx=54, majf=0, minf=3 00:32:04.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:04.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:04.812 issued rwts: total=23742,23632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:04.812 00:32:04.812 Run status group 0 (all jobs): 00:32:04.812 READ: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:32:04.812 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:04.812 22:37:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:05.070 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:05.070 fio-3.35 00:32:05.070 Starting 1 thread 00:32:05.636 [2024-12-16 22:37:55.312010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499da0 is same with the state(6) to be set 00:32:05.636 [2024-12-16 22:37:55.312070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499da0 is same with the state(6) to be set 00:32:05.636 [2024-12-16 22:37:55.312078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499da0 is same with the state(6) to be set 00:32:07.538 00:32:07.538 test: (groupid=0, jobs=1): err= 0: pid=471641: Mon Dec 16 22:37:56 2024 00:32:07.538 read: IOPS=11.1k, BW=173MiB/s (181MB/s)(347MiB/2006msec) 00:32:07.538 slat (nsec): min=2511, max=81816, avg=2813.44, stdev=1216.20 00:32:07.538 clat (usec): min=1591, max=13597, avg=6667.30, stdev=1611.30 00:32:07.538 lat (usec): min=1594, max=13600, avg=6670.11, stdev=1611.39 00:32:07.538 clat percentiles (usec): 00:32:07.538 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:32:07.538 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 
6587], 60.00th=[ 7046], 00:32:07.538 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8717], 95.00th=[ 9634], 00:32:07.538 | 99.00th=[11469], 99.50th=[12256], 99.90th=[13042], 99.95th=[13173], 00:32:07.538 | 99.99th=[13566] 00:32:07.538 bw ( KiB/s): min=83744, max=95360, per=50.21%, avg=88912.00, stdev=5458.32, samples=4 00:32:07.538 iops : min= 5234, max= 5960, avg=5557.00, stdev=341.15, samples=4 00:32:07.538 write: IOPS=6355, BW=99.3MiB/s (104MB/s)(181MiB/1826msec); 0 zone resets 00:32:07.538 slat (usec): min=29, max=386, avg=31.31, stdev= 7.05 00:32:07.538 clat (usec): min=3171, max=14515, avg=8602.81, stdev=1512.97 00:32:07.538 lat (usec): min=3200, max=14545, avg=8634.12, stdev=1514.13 00:32:07.538 clat percentiles (usec): 00:32:07.538 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7373], 00:32:07.538 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:32:07.538 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:32:07.538 | 99.00th=[12780], 99.50th=[13435], 99.90th=[14091], 99.95th=[14222], 00:32:07.538 | 99.99th=[14484] 00:32:07.538 bw ( KiB/s): min=87424, max=99200, per=90.82%, avg=92360.00, stdev=5836.90, samples=4 00:32:07.538 iops : min= 5464, max= 6200, avg=5772.50, stdev=364.81, samples=4 00:32:07.538 lat (msec) : 2=0.05%, 4=1.79%, 10=90.00%, 20=8.16% 00:32:07.538 cpu : usr=85.59%, sys=13.77%, ctx=24, majf=0, minf=3 00:32:07.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:07.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:07.538 issued rwts: total=22203,11606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:07.538 00:32:07.538 Run status group 0 (all jobs): 00:32:07.539 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=347MiB (364MB), run=2006-2006msec 00:32:07.539 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=181MiB (190MB), run=1826-1826msec 00:32:07.539 22:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:07.539 22:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:32:10.824 Nvme0n1 00:32:10.824 22:38:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:13.355 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7b8fb8e1-5361-4c89-8571-73cdf2aff237 00:32:13.355 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7b8fb8e1-5361-4c89-8571-73cdf2aff237 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=7b8fb8e1-5361-4c89-8571-73cdf2aff237 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:13.614 { 00:32:13.614 "uuid": "7b8fb8e1-5361-4c89-8571-73cdf2aff237", 00:32:13.614 "name": "lvs_0", 00:32:13.614 "base_bdev": "Nvme0n1", 00:32:13.614 "total_data_clusters": 930, 00:32:13.614 "free_clusters": 930, 00:32:13.614 "block_size": 512, 00:32:13.614 "cluster_size": 1073741824 00:32:13.614 } 00:32:13.614 ]' 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="7b8fb8e1-5361-4c89-8571-73cdf2aff237") .free_clusters' 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:13.614 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="7b8fb8e1-5361-4c89-8571-73cdf2aff237") .cluster_size' 00:32:13.873 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:13.873 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:13.873 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:13.873 952320 00:32:13.873 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:14.131 327bdbb9-26d1-4bae-a7da-dd6d4c0cc389 00:32:14.131 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:14.390 22:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp 
adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:14.649 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.906 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.906 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.906 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.906 22:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:15.164 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:15.164 fio-3.35 00:32:15.164 Starting 1 thread 00:32:17.698 00:32:17.698 test: (groupid=0, jobs=1): err= 0: pid=473341: Mon Dec 16 22:38:07 2024 00:32:17.698 read: IOPS=7899, BW=30.9MiB/s (32.4MB/s)(63.2MiB/2048msec) 00:32:17.698 slat (nsec): min=1525, max=87599, avg=1651.24, stdev=987.31 00:32:17.698 clat (usec): min=804, max=169843, avg=8863.16, stdev=10573.36 
00:32:17.698 lat (usec): min=806, max=169861, avg=8864.81, stdev=10573.49 00:32:17.698 clat percentiles (msec): 00:32:17.698 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:17.698 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:32:17.698 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:32:17.698 | 99.00th=[ 10], 99.50th=[ 55], 99.90th=[ 169], 99.95th=[ 169], 00:32:17.698 | 99.99th=[ 169] 00:32:17.698 bw ( KiB/s): min=22906, max=35488, per=100.00%, avg=32230.50, stdev=6217.45, samples=4 00:32:17.698 iops : min= 5726, max= 8872, avg=8057.50, stdev=1554.61, samples=4 00:32:17.698 write: IOPS=7884, BW=30.8MiB/s (32.3MB/s)(63.1MiB/2048msec); 0 zone resets 00:32:17.698 slat (nsec): min=1575, max=72000, avg=1706.30, stdev=621.03 00:32:17.698 clat (usec): min=298, max=168479, avg=7258.62, stdev=10008.04 00:32:17.698 lat (usec): min=300, max=168483, avg=7260.32, stdev=10008.19 00:32:17.698 clat percentiles (msec): 00:32:17.698 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:32:17.698 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:17.698 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:32:17.698 | 99.00th=[ 8], 99.50th=[ 52], 99.90th=[ 169], 99.95th=[ 169], 00:32:17.698 | 99.99th=[ 169] 00:32:17.698 bw ( KiB/s): min=23864, max=34952, per=100.00%, avg=32144.00, stdev=5520.35, samples=4 00:32:17.698 iops : min= 5966, max= 8738, avg=8036.00, stdev=1380.09, samples=4 00:32:17.698 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:17.698 lat (msec) : 2=0.05%, 4=0.24%, 10=98.90%, 20=0.01%, 50=0.17% 00:32:17.698 lat (msec) : 100=0.23%, 250=0.40% 00:32:17.698 cpu : usr=71.42%, sys=27.85%, ctx=110, majf=0, minf=3 00:32:17.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:17.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.698 issued rwts: total=16179,16147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.699 00:32:17.699 Run status group 0 (all jobs): 00:32:17.699 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=63.2MiB (66.3MB), run=2048-2048msec 00:32:17.699 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=63.1MiB (66.1MB), run=2048-2048msec 00:32:17.699 22:38:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:17.699 22:38:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=69599576-7fab-4bb6-8693-7baf3d24e44f 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 69599576-7fab-4bb6-8693-7baf3d24e44f 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=69599576-7fab-4bb6-8693-7baf3d24e44f 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:18.634 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:18.634 
22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.893 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:18.893 { 00:32:18.893 "uuid": "7b8fb8e1-5361-4c89-8571-73cdf2aff237", 00:32:18.893 "name": "lvs_0", 00:32:18.893 "base_bdev": "Nvme0n1", 00:32:18.893 "total_data_clusters": 930, 00:32:18.893 "free_clusters": 0, 00:32:18.893 "block_size": 512, 00:32:18.893 "cluster_size": 1073741824 00:32:18.893 }, 00:32:18.893 { 00:32:18.893 "uuid": "69599576-7fab-4bb6-8693-7baf3d24e44f", 00:32:18.893 "name": "lvs_n_0", 00:32:18.893 "base_bdev": "327bdbb9-26d1-4bae-a7da-dd6d4c0cc389", 00:32:18.893 "total_data_clusters": 237847, 00:32:18.893 "free_clusters": 237847, 00:32:18.893 "block_size": 512, 00:32:18.893 "cluster_size": 4194304 00:32:18.893 } 00:32:18.893 ]' 00:32:18.893 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="69599576-7fab-4bb6-8693-7baf3d24e44f") .free_clusters' 00:32:18.893 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:18.893 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="69599576-7fab-4bb6-8693-7baf3d24e44f") .cluster_size' 00:32:19.152 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:19.152 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:19.152 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:19.152 951388 00:32:19.152 22:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:19.719 1d1f74c9-6a96-4175-ac67-a15303754309 00:32:19.719 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:19.719 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:19.977 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# local sanitizers 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:20.236 22:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:20.495 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:20.495 fio-3.35 00:32:20.495 Starting 1 thread 00:32:23.030 00:32:23.030 test: (groupid=0, jobs=1): err= 0: pid=474361: Mon Dec 16 22:38:12 2024 00:32:23.030 read: IOPS=7837, BW=30.6MiB/s (32.1MB/s)(61.4MiB/2006msec) 00:32:23.030 slat (nsec): min=1506, max=124717, avg=1652.11, stdev=1402.08 00:32:23.030 clat (usec): min=3240, max=14828, avg=9010.67, stdev=785.16 00:32:23.030 lat (usec): min=3245, max=14829, avg=9012.33, stdev=785.06 00:32:23.030 clat percentiles (usec): 00:32:23.030 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:32:23.030 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:32:23.030 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 00:32:23.030 | 99.00th=[10683], 99.50th=[10945], 99.90th=[13829], 99.95th=[14484], 00:32:23.030 | 99.99th=[14746] 00:32:23.030 bw ( KiB/s): min=30272, max=31736, per=99.78%, avg=31282.00, stdev=692.15, samples=4 00:32:23.030 iops : min= 7568, max= 7934, avg=7820.50, stdev=173.04, samples=4 00:32:23.030 write: IOPS=7808, BW=30.5MiB/s (32.0MB/s)(61.2MiB/2006msec); 0 zone resets 
00:32:23.030 slat (nsec): min=1555, max=75857, avg=1699.44, stdev=686.67 00:32:23.030 clat (usec): min=1560, max=12685, avg=7286.03, stdev=639.54 00:32:23.030 lat (usec): min=1566, max=12687, avg=7287.73, stdev=639.49 00:32:23.030 clat percentiles (usec): 00:32:23.030 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:32:23.030 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:32:23.030 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:32:23.030 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11863], 00:32:23.030 | 99.99th=[12649] 00:32:23.030 bw ( KiB/s): min=31040, max=31384, per=100.00%, avg=31238.00, stdev=145.97, samples=4 00:32:23.030 iops : min= 7760, max= 7846, avg=7809.50, stdev=36.49, samples=4 00:32:23.030 lat (msec) : 2=0.01%, 4=0.11%, 10=95.58%, 20=4.29% 00:32:23.030 cpu : usr=71.27%, sys=27.88%, ctx=139, majf=0, minf=3 00:32:23.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:23.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:23.030 issued rwts: total=15723,15664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:23.030 00:32:23.030 Run status group 0 (all jobs): 00:32:23.030 READ: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.4MiB (64.4MB), run=2006-2006msec 00:32:23.030 WRITE: bw=30.5MiB/s (32.0MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=61.2MiB (64.2MB), run=2006-2006msec 00:32:23.030 22:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:23.289 22:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:23.289 22:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:27.478 22:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:27.478 22:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:30.011 22:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:30.269 22:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.173 rmmod nvme_tcp 00:32:32.173 rmmod nvme_fabrics 00:32:32.173 rmmod nvme_keyring 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 470642 ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 470642 ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470642' 00:32:32.173 killing process with pid 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 470642 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.173 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.174 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.174 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.174 22:38:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.709 00:32:34.709 real 0m40.062s 00:32:34.709 user 2m39.736s 00:32:34.709 sys 0m9.028s 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.709 ************************************ 00:32:34.709 END TEST nvmf_fio_host 00:32:34.709 ************************************ 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.709 22:38:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.709 ************************************ 00:32:34.709 START TEST nvmf_failover 00:32:34.709 ************************************ 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:34.709 * Looking for test storage... 00:32:34.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.709 --rc genhtml_branch_coverage=1 00:32:34.709 --rc genhtml_function_coverage=1 00:32:34.709 --rc genhtml_legend=1 00:32:34.709 --rc geninfo_all_blocks=1 00:32:34.709 --rc geninfo_unexecuted_blocks=1 00:32:34.709 00:32:34.709 ' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.709 --rc genhtml_branch_coverage=1 00:32:34.709 --rc genhtml_function_coverage=1 00:32:34.709 --rc genhtml_legend=1 00:32:34.709 --rc geninfo_all_blocks=1 00:32:34.709 --rc geninfo_unexecuted_blocks=1 00:32:34.709 00:32:34.709 ' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.709 --rc genhtml_branch_coverage=1 00:32:34.709 --rc genhtml_function_coverage=1 00:32:34.709 --rc genhtml_legend=1 00:32:34.709 --rc geninfo_all_blocks=1 00:32:34.709 --rc geninfo_unexecuted_blocks=1 00:32:34.709 00:32:34.709 ' 00:32:34.709 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:34.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.709 --rc genhtml_branch_coverage=1 00:32:34.709 --rc genhtml_function_coverage=1 00:32:34.709 --rc genhtml_legend=1 00:32:34.709 --rc geninfo_all_blocks=1 00:32:34.710 --rc geninfo_unexecuted_blocks=1 00:32:34.710 00:32:34.710 ' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:34.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
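The rpc_py wrapper defined here is the same scripts/rpc.py that the fio-host test above used to assemble its target. Condensed from the log records earlier in this run (absolute workspace paths shortened; a running nvmf_tgt listening on the default RPC socket is assumed), that bring-up was:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420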
00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.710 22:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:41.279 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:41.280 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:41.280 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:41.280 Found net devices under 0000:af:00.0: cvl_0_0 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:41.280 Found net devices under 0000:af:00.1: cvl_0_1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:41.280 22:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:41.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:32:41.280 00:32:41.280 --- 10.0.0.2 ping statistics --- 00:32:41.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.280 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:32:41.280 00:32:41.280 --- 10.0.0.1 ping statistics --- 00:32:41.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.280 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=479596 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 479596 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479596 ']' 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:41.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.280 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.280 [2024-12-16 22:38:30.152037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:41.280 [2024-12-16 22:38:30.152083] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:41.280 [2024-12-16 22:38:30.227955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:41.280 [2024-12-16 22:38:30.249832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
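What was just set up is the single-host TCP loopback topology the rest of the test depends on: the two ports of one physical NIC are split across network stacks, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1); an iptables rule admits port 4420, and one ping in each direction proves the path before the target starts. A condensed sketch of that sequence, using the interface and namespace names from this run:

    # Target port lives in its own namespace; the initiator stays in the root one.
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port
    # From here on, every target-side command (nvmf_tgt included) is wrapped in
    # "ip netns exec cvl_0_0_ns_spdk ...", per NVMF_TARGET_NS_CMD.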
00:32:41.280 [2024-12-16 22:38:30.249869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:41.280 [2024-12-16 22:38:30.249876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:41.280 [2024-12-16 22:38:30.249882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:41.280 [2024-12-16 22:38:30.249887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:41.280 [2024-12-16 22:38:30.251210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:41.280 [2024-12-16 22:38:30.251283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.281 [2024-12-16 22:38:30.251284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:41.281 [2024-12-16 22:38:30.558610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:41.281 Malloc0 00:32:41.281 22:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:41.539 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:41.539 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.797 [2024-12-16 22:38:31.379022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.797 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:42.055 [2024-12-16 22:38:31.575582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:42.055 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:42.314 [2024-12-16 22:38:31.772228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=479848
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 479848 /var/tmp/bdevperf.sock
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479848 ']'
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:42.314 22:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:42.572 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:42.572 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:32:42.572 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:42.572 NVMe0n1
00:32:42.831 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:42.831
00:32:43.089 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=479966
00:32:43.089 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:43.089 22:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:32:44.026 22:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:44.286 [2024-12-16 22:38:33.729576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b5aa0 is same with the state(6) to be set
[identical recv-state messages for tqpair=0x12b5aa0, timestamps 22:38:33.729648 through 22:38:33.729872, omitted]
00:32:44.286 22:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:47.577 22:38:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:47.577
00:32:47.577 22:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:47.577 [2024-12-16 22:38:37.243128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b6fe0 is same with the state(6) to be set
[identical recv-state messages for tqpair=0x12b6fe0, timestamps 22:38:37.243178 through 22:38:37.243346, omitted]
00:32:47.578 22:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:51.006 22:38:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:51.006 [2024-12-16 22:38:40.457751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:51.006 22:38:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:52.062 22:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:52.062 [2024-12-16 22:38:41.669206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7ea0 is same with the state(6) to be set
[identical recv-state messages for tqpair=0x12b7ea0, timestamps 22:38:41.669249 through 22:38:41.669393, omitted]
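Stripped of timestamps and absolute paths, the failover exercise the log just walked through is a small RPC choreography: provision one subsystem with three TCP listeners, attach it from bdevperf twice with -x failover so bdev_nvme keeps an alternate path, then remove and re-add listeners under load. A sketch under those assumptions (rpc.py stands for the full scripts/rpc.py invocations above; the loop is a condensation, not the literal test script):

    # Target side (run inside the target namespace in this log):
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Host side: two portals for the same controller, failover policy enabled.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # While bdevperf runs, yank the active listener and let I/O move over:
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # the recv-state errors above are this qpair being torn down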
00:32:52.062 22:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 479966
00:32:58.933 {
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.007154,
      "iops": 11135.888923376144,
      "mibps": 43.49956610693806,
      "io_failed": 14429,
      "io_timeout": 0,
      "avg_latency_us": 10559.056437522278,
      "min_latency_us": 409.6,
      "max_latency_us": 21845.333333333332
    }
  ],
  "core_count": 1
}
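The JSON block above is bdevperf's per-job summary: the job still finished its 15-second verify run at roughly 11.1k IOPS, and io_failed=14429 lines up with the ABORTED - SQ DELETION completions dumped from try.txt below, the commands caught in flight while listeners were being removed. If the summary were captured to a file, the headline numbers can be pulled out with jq; results.json here is hypothetical, since in this run the JSON goes straight to the log:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, failed: \(.io_failed)"' results.json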
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479848 ']'
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479848'
killing process with pid 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479848
00:32:58.933 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:58.933 [2024-12-16 22:38:31.844925] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:32:58.933 [2024-12-16 22:38:31.844977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479848 ]
00:32:58.933 [2024-12-16 22:38:31.917333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:58.933 [2024-12-16 22:38:31.939826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:58.933 Running I/O for 15 seconds...
00:32:58.933 11064.00 IOPS, 43.22 MiB/s [2024-12-16T21:38:48.634Z]
00:32:58.933 [2024-12-16 22:38:33.730917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:58.933 [2024-12-16 22:38:33.730949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the remaining print_command/print_completion pairs follow the same pattern and are omitted: WRITE and READ commands with lba 96792 through 97488, each completed ABORTED - SQ DELETION (00/08); the dump continues with]
00:32:58.935 [2024-12-16 22:38:33.732224] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.935 [2024-12-16 22:38:33.732238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.935 [2024-12-16 22:38:33.732480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.935 [2024-12-16 22:38:33.732486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.936 [2024-12-16 22:38:33.732699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.936 [2024-12-16 22:38:33.732886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.936 [2024-12-16 22:38:33.732891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:32:58.936 [2024-12-16 22:38:33.732898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732941] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:58.936 [2024-12-16 22:38:33.732962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.936 [2024-12-16 22:38:33.732970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.936 [2024-12-16 22:38:33.732984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.732991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.936 [2024-12-16 22:38:33.732997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.936 [2024-12-16 22:38:33.733003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:58.936 [2024-12-16 22:38:33.733010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
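The failover notice above is the heart of this phase of the test: the active TCP path (10.0.0.2:4420) has gone down, every command still queued on its I/O submission queue has just been completed with ABORTED - SQ DELETION, and bdev_nvme switches the controller to its alternate trid (10.0.0.2:4421). As a hedged illustration of where two such paths come from (not the exact commands this job ran; RPC flag spellings vary across SPDK releases, and the bdev name Nvme0 is an assumption):

```bash
#!/usr/bin/env bash
# Sketch only: one NVMe-oF TCP subsystem exposed on two listeners, both
# registered on the initiator side as failover paths of a single controller.
set -euo pipefail
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: two listeners on the same subsystem, i.e. the two trids
# seen in the failover notice above.
$RPC nvmf_create_transport -t tcp
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -f ipv4 -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -f ipv4 -a 10.0.0.2 -s 4421

# Initiator side: attach both trids under one controller name; with
# "-x failover" the second trid is kept as a standby that
# bdev_nvme_failover_trid can switch to when the active path fails.
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n "$NQN" -x failover
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n "$NQN" -x failover
```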
00:32:58.936 [2024-12-16 22:38:33.733018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:58.936 [2024-12-16 22:38:33.733045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d63a0 (9): Bad file descriptor
00:32:58.936 [2024-12-16 22:38:33.735798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:58.936 [2024-12-16 22:38:33.886257] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:58.936 10337.50 IOPS, 40.38 MiB/s [2024-12-16T21:38:48.637Z] 10714.33 IOPS, 41.85 MiB/s [2024-12-16T21:38:48.637Z] 10877.25 IOPS, 42.49 MiB/s [2024-12-16T21:38:48.637Z]
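Every completion printed in these bursts carries the same status pair "(00/08)": status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is the expected status for I/O still in flight on a submission queue that is torn down during a disconnect, so these notices are fallout from the failover itself rather than independent failures. A minimal, hedged decoder for the pairs that actually occur in this log:

```bash
# Minimal lookup for the "(SCT/SC)" pair SPDK prints after each completion.
# Only codes seen in this log are mapped; anything else falls through to the
# raw pair (the full tables live in the NVMe base specification).
decode_nvme_status() {
  local sct=$1 sc=$2
  case "${sct}/${sc}" in
    00/00) echo "SUCCESSFUL COMPLETION" ;;
    00/08) echo "ABORTED - SQ DELETION" ;;
    *)     echo "unmapped status SCT=0x${sct} SC=0x${sc}" ;;
  esac
}

decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION
```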
00:32:58.937 [2024-12-16 22:38:37.244983-245472] [... condensed: second abort burst after the reset; repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs for READ sqid:1 nsid:1 lba:67448-67688 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:58.938 [2024-12-16 22:38:37.245481-246403] [... condensed: WRITE sqid:1 nsid:1 lba:67704-68216 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), likewise every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:32:58.939 [2024-12-16 22:38:37.246428-246758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o [... condensed: queued WRITE commands (sqid:1 cid:0 nsid:1 lba:68224-68328 len:8 PRP1 0x0 PRP2 0x0) completed manually via 558:nvme_qpair_manual_complete_request, each ABORTED - SQ DELETION (00/08) ...]
00:32:58.940 [2024-12-16 22:38:37.246764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68336 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68344 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68352 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68360 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68368 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68376 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 
22:38:37.246911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68384 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68392 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68400 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.246981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.246986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68408 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.246992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.246999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.247003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.247010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68416 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.247016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.247023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.247029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:58.940 [2024-12-16 22:38:37.247034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68424 len:8 PRP1 0x0 PRP2 0x0 00:32:58.940 [2024-12-16 22:38:37.247041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.940 [2024-12-16 22:38:37.247047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:58.940 [2024-12-16 22:38:37.247052] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.247057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68432 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.247063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.247070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:58.940 [2024-12-16 22:38:37.247075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.247080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68440 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.247087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.247093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:58.940 [2024-12-16 22:38:37.247098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.247103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68448 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.247109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.258313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:58.940 [2024-12-16 22:38:37.258323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.258329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68456 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.258337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.258343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:58.940 [2024-12-16 22:38:37.258348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.258354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68464 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.258360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.258366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:58.940 [2024-12-16 22:38:37.258372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:58.940 [2024-12-16 22:38:37.258377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67696 len:8 PRP1 0x0 PRP2 0x0
00:32:58.940 [2024-12-16 22:38:37.258383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.258427] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:58.940 [2024-12-16 22:38:37.258449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:58.940 [2024-12-16 22:38:37.258458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.940 [2024-12-16 22:38:37.258467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:58.940 [2024-12-16 22:38:37.258474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:37.258481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:58.941 [2024-12-16 22:38:37.258487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:37.258494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:58.941 [2024-12-16 22:38:37.258500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:37.258506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:58.941 [2024-12-16 22:38:37.258538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d63a0 (9): Bad file descriptor
00:32:58.941 [2024-12-16 22:38:37.261784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:58.941 [2024-12-16 22:38:37.331038] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
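The stretch above is the core of the failover event: outstanding I/O on qid:1 is completed with ABORTED - SQ DELETION (00/08), queued requests are manually completed, the admin queue's pending ASYNC EVENT REQUESTs are aborted, the TCP qpair flush fails with Bad file descriptor, and bdev_nvme moves nqn.2016-06.io.spdk:cnode1 from 10.0.0.2:4421 to 10.0.0.2:4422 before the reset completes. A minimal offline triage sketch for a log like this, in Python; the regexes are assumptions inferred from the record layout above (not a stable SPDK format), and the 512-byte LBA size is an inference from len:8 I/Os lining up with the roughly 4 KiB-per-I/O throughput samples that follow, not something read from the controller:

    import re
    from collections import Counter

    # Record shapes assumed from the SPDK notices in this log (hypothetical patterns).
    CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                        r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
    ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
    FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

    def summarize(lines):
        """Count aborted completions, bucket commands by opcode, track failovers."""
        ops, aborts, lbas, failovers = Counter(), 0, [], []
        for line in lines:
            aborts += len(ABORT_RE.findall(line))
            for opc, lba, _length in CMD_RE.findall(line):
                ops[opc] += 1
                lbas.append(int(lba))
            failovers.extend(FAILOVER_RE.findall(line))
        lba_span = (min(lbas), max(lbas)) if lbas else None
        return ops, aborts, lba_span, failovers

    # Cross-check the interval samples below: len:8 LBAs x 512 B = 4 KiB per I/O,
    # so 10778.40 IOPS x 4096 B should reproduce the reported 42.10 MiB/s.
    assert round(10778.40 * 8 * 512 / 1048576, 2) == 42.1

Fed the records above, summarize() would report WRITE-heavy aborts spanning roughly lba 67696 through 68464 around the 4421 to 4422 failover, which points at a submission queue deleted mid-run rather than media errors (every completion carries dnr:0, i.e. retryable).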
00:32:58.941 10778.40 IOPS, 42.10 MiB/s [2024-12-16T21:38:48.642Z] 10885.33 IOPS, 42.52 MiB/s [2024-12-16T21:38:48.642Z] 10971.86 IOPS, 42.86 MiB/s [2024-12-16T21:38:48.642Z] 11003.25 IOPS, 42.98 MiB/s [2024-12-16T21:38:48.642Z]
[2024-12-16 22:38:41.669895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.669930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.669946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.669954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.669964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.669971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.669980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.669986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.669994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.670001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.670009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.670016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.670024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.670030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.670039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.670050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.670058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:58.941 [2024-12-16 22:38:41.670065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.941 [2024-12-16 22:38:41.670073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.941 [2024-12-16 22:38:41.670473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.941 [2024-12-16 22:38:41.670479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 
[2024-12-16 22:38:41.670674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.942 [2024-12-16 22:38:41.670942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.942 [2024-12-16 22:38:41.670950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.670956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.670964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.670975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.670983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.670989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.670997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 22:38:41.671250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:58.943 [2024-12-16 22:38:41.671259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:58.943 [2024-12-16 
22:38:41.671265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.943 [2024-12-16 22:38:41.671273 - 22:38:41.671483] nvme_qpair.c: 243/474: *NOTICE*: (repeated entries condensed) WRITE sqid:1 nsid:1 lba:96512 through lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.943 [2024-12-16 22:38:41.671505 - 22:38:41.682344] nvme_qpair.c: 579/558/243/474: (repeated entries condensed) *ERROR*: aborting queued i/o; *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:96632 through lba:96800 len:8 PRP1 0x0 PRP2 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.944 [2024-12-16 22:38:41.682401] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:58.944 [2024-12-16 22:38:41.682427 - 22:38:41.682492] nvme_qpair.c: 223/474: *NOTICE*: (repeated entries condensed) ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:58.945 [2024-12-16 22:38:41.682504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:58.945 [2024-12-16 22:38:41.682542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d63a0 (9): Bad file descriptor
00:32:58.945 [2024-12-16 22:38:41.688007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:32:58.945 11055.00 IOPS, 43.18 MiB/s [2024-12-16T21:38:48.646Z]
00:32:58.945 [2024-12-16 22:38:41.763875] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
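The abort/reset churn above is the initiator side of a forced path failover: every queued WRITE on the torn-down submission queue is completed with SQ DELETION, then bdev_nvme fails over and resets the controller. For reference, a minimal sketch of the three-path setup that drives it, reconstructed from the rpc.py calls traced further down in this log (addresses, ports and the NQN are the ones used in this run; the loop is a condensation, not the script's literal text):

    # Target side: expose the subsystem on two extra ports.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side (bdevperf app): register all three paths under one bdev.
    # -x failover keeps the extra trids as standby paths rather than active ones.
    for port in 4420 4421 4422; do
        ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # Removing the active path (detach, or deleting its listener) then triggers
    # the "Start failover from A to B" transitions logged above.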
00:32:58.945 10994.60 IOPS, 42.95 MiB/s [2024-12-16T21:38:48.646Z]
00:32:58.945 11031.55 IOPS, 43.09 MiB/s [2024-12-16T21:38:48.646Z]
00:32:58.945 11072.67 IOPS, 43.25 MiB/s [2024-12-16T21:38:48.646Z]
00:32:58.945 11105.00 IOPS, 43.38 MiB/s [2024-12-16T21:38:48.646Z]
00:32:58.945 11116.14 IOPS, 43.42 MiB/s
00:32:58.945 Latency(us)
00:32:58.945 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:32:58.945 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:58.945 Verification LBA range: start 0x0 length 0x4000
00:32:58.945 NVMe0n1            : 15.01       11135.89  43.50  961.47  0.00  10559.06  409.60  21845.33
00:32:58.945 ===================================================================================================================
00:32:58.945 Total              : 11135.89  43.50  961.47  0.00  10559.06  409.60  21845.33
00:32:58.945 Received shutdown signal, test time was about 15.000000 seconds
00:32:58.945
00:32:58.945 Latency(us)
00:32:58.945 Device Information : runtime(s)  IOPS   MiB/s  Fail/s  TO/s  Average  min   max
00:32:58.945 ===================================================================================================================
00:32:58.945 Total              : 0.00        0.00   0.00   0.00    0.00   0.00  0.00
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=482334
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 482334 /var/tmp/bdevperf.sock
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 482334 ']'
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:58.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
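The pass criterion for the phase that just finished is simply the number of successful controller resets recorded in the run's log: three alternate paths, so exactly three resets. A sketch of the check traced above at failover.sh@65-@67 (the variable holding the log path is illustrative; the run keeps it in test/nvmf/host/try.txt):

    # One 'Resetting controller successful' line is expected per induced failover.
    count=$(grep -c 'Resetting controller successful' "$tryfile")   # $tryfile: this run's try.txt
    if (( count != 3 )); then
        echo "expected 3 successful failovers, saw $count" >&2
        exit 1
    fi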
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:58.945 22:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:58.945 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:58.945 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:32:58.945 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:58.945 [2024-12-16 22:38:48.346385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:58.945 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:58.945 [2024-12-16 22:38:48.534954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:58.945 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:59.204 NVMe0n1
00:32:59.204 22:38:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:59.462
00:32:59.462 22:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:00.029
00:33:00.029 22:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:00.029 22:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:33:00.029 22:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:00.288 22:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:33:03.574 22:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:03.574 22:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:33:03.574 22:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:03.574 22:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=483231
00:33:03.574 22:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 483231
00:33:04.511 {
00:33:04.511   "results": [
00:33:04.511     {
00:33:04.511       "job": "NVMe0n1",
00:33:04.511       "core_mask": "0x1",
00:33:04.511       "workload": "verify",
00:33:04.511       "status": "finished",
00:33:04.511       "verify_range": {
00:33:04.511         "start": 0,
00:33:04.511         "length": 16384
00:33:04.511       },
00:33:04.511       "queue_depth": 128,
00:33:04.511       "io_size": 4096,
00:33:04.511       "runtime": 1.013775,
00:33:04.511       "iops": 11299.351433996695,
00:33:04.511       "mibps": 44.13809153904959,
00:33:04.511       "io_failed": 0,
00:33:04.511       "io_timeout": 0,
00:33:04.511       "avg_latency_us": 11287.587395481283,
00:33:04.511       "min_latency_us": 2356.175238095238,
00:33:04.511       "max_latency_us": 9611.946666666667
00:33:04.511     }
00:33:04.511   ],
00:33:04.511   "core_count": 1
00:33:04.511 }
00:33:04.511 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:04.511 [2024-12-16 22:38:47.980349] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:33:04.511 [2024-12-16 22:38:47.980400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482334 ]
00:33:04.511 [2024-12-16 22:38:48.055704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:04.511 [2024-12-16 22:38:48.075296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:33:04.511 [2024-12-16 22:38:49.848074] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:04.511 [2024-12-16 22:38:49.848121 - 22:38:49.848175] nvme_qpair.c: 223/474: *NOTICE*: (repeated entries condensed) ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:04.511 [2024-12-16 22:38:49.848182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:33:04.511 [2024-12-16 22:38:49.848212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:33:04.511 [2024-12-16 22:38:49.848227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cd3a0 (9): Bad file descriptor
00:33:04.511 [2024-12-16 22:38:49.857778] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:33:04.511 Running I/O for 1 seconds...
00:33:04.511 11234.00 IOPS, 43.88 MiB/s
00:33:04.511 Latency(us)
00:33:04.511 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:33:04.511 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:04.511 Verification LBA range: start 0x0 length 0x4000
00:33:04.511 NVMe0n1            : 1.01        11299.35  44.14  0.00    0.00  11287.59  2356.18  9611.95
00:33:04.511 ===================================================================================================================
00:33:04.511 Total              : 11299.35  44.14  0.00  0.00  11287.59  2356.18  9611.95
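Phase two, whose output this is, runs a one-second bdevperf job over the RPC socket while paths are detached underneath it, then reads back the JSON summary shown above. In sketch form (paths and arguments as traced in this log; capturing the output and querying it with jq is an illustrative addition, not something failover.sh itself does):

    # Detach the active path; bdev_nvme fails over to one of the standby trids.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    # Kick off the timed run; it prints the JSON summary seen above when done.
    result=$(./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests)
    jq '.results[0].iops' <<< "$result"    # 11299.35... for the run above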
00:33:04.511 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:04.511 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:33:04.769 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:05.028 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:05.028 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:33:05.286 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:05.286 22:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:33:08.572 22:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:08.572 22:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:33:08.572 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 482334
00:33:08.572 (killprocess trace condensed: common/autotest_common.sh@954-@964 confirm pid 482334 is alive, resolve its comm to reactor_0 and check that it is not sudo)
00:33:08.572 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482334'
00:33:08.572 killing process with pid 482334
00:33:08.572 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 482334
00:33:08.572 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 482334
00:33:08.831 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:33:08.831 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:09.090 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:09.090 rmmod nvme_tcp
00:33:09.090 rmmod nvme_fabrics
00:33:09.090 rmmod nvme_keyring
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 479596 ']'
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 479596
00:33:09.350 (killprocess trace condensed: pid 479596 resolves to reactor_1, not sudo)
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479596'
00:33:09.350 killing process with pid 479596
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479596
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479596
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
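Teardown is symmetric to setup: stop the bdevperf initiator, delete the subsystem, then unwind the kernel initiator modules. A condensed sketch of what the trace around here performs (the pid variable stands in for the literal 482334 above):

    kill "$bdevperf_pid" && wait "$bdevperf_pid"      # stop the initiator app
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics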
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:09.350 22:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:11.886 22:39:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:11.886
00:33:11.886 real 0m36.956s
00:33:11.886 user 1m57.053s
00:33:11.886 sys 0m7.704s
00:33:11.886 22:39:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:11.886 22:39:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:11.886 ************************************
00:33:11.886 END TEST nvmf_failover
00:33:11.886 ************************************
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:11.886 ************************************
00:33:11.886 START TEST nvmf_host_discovery
00:33:11.886 ************************************
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:33:11.886 * Looking for test storage...
00:33:11.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:11.886 (cmp_versions trace condensed: scripts/common.sh@333-@368 split ver1=1.15 and ver2=2 on IFS=.-:, compare field-wise via decimal; ver1[0]=1 < ver2[0]=2, so 'lt 1.15 2' returns 0)
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724/@1725 -- # export and assign LCOV_OPTS and LCOV (same option block four times: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1)
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
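All of that trace is common.sh deciding whether the installed lcov predates 2.x so it can pick compatible coverage flags: versions are split on '.', '-' and ':' and compared numerically field by field. A minimal re-sketch of that semantics (not the library code itself):

    # True when version $1 sorts strictly before version $2, field by field.
    ver_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the 'lt 1.15 2' trace above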
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:11.886 (paths/export.sh@2-@6 trace condensed: PATH is re-exported with the toolchain prefixes /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended, repeatedly accumulated ahead of the standard system paths, then echoed)
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:33:11.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:11.886 22:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:18.456 (array setup condensed: nvmf/common.sh@315-@344 declare pci_devs, pci_net_devs, pci_drivers and net_devs, then fill the supported-ID tables: e810 gets the intel IDs 0x1592 and 0x159b, x722 gets 0x37d2, and mlx gets the mellanox IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013)
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:33:18.456 Found 0000:af:00.0 (0x8086 - 0x159b)
00:33:18.456 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:33:18.456 Found 0000:af:00.1 (0x8086 - 0x159b)
00:33:18.456 (per-device checks condensed: nvmf/common.sh@368-@378 confirm for both ports that driver 'ice' is neither unknown nor unbound and that 0x159b is not an RDMA-only ID, so both stay on the TCP path)
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:33:18.457 Found net devices under 0000:af:00.0: cvl_0_0
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:33:18.457 Found net devices under 0000:af:00.1: cvl_0_1
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
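The device walk above boils down to: for each whitelisted NIC PCI address, look up its kernel net devices in sysfs and collect the names. In sketch form, mirroring the traced common.sh expansions (array names as in the trace):

    # pci_devs holds the whitelisted NIC addresses, here 0000:af:00.0 and 0000:af:00.1.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs: one dir per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to names (cvl_0_0, ...)
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done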
22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.457 22:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:33:18.457 00:33:18.457 --- 10.0.0.2 ping statistics --- 00:33:18.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.457 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:33:18.457 00:33:18.457 --- 10.0.0.1 ping statistics --- 00:33:18.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.457 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=487982 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 487982 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 487982 ']' 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 [2024-12-16 22:39:07.216212] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
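The two pings close out nvmf_tcp_init: the first e810 port (cvl_0_0, found under 0000:af:00.0) has been moved into a fresh network namespace as the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of the equivalent manual setup, reusing the interface and namespace names found on this runner (they will differ on other hardware):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP data port, as done by the ipts helper above
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The target application is then launched inside that namespace (the NVMF_TARGET_NS_CMD prefix assembled above), which is why the nvmf_tgt started below binds to 10.0.0.2 rather than to a root-namespace address.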
00:33:18.457 [2024-12-16 22:39:07.216258] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.457 [2024-12-16 22:39:07.295248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.457 [2024-12-16 22:39:07.316531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.457 [2024-12-16 22:39:07.316566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.457 [2024-12-16 22:39:07.316573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.457 [2024-12-16 22:39:07.316580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.457 [2024-12-16 22:39:07.316585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.457 [2024-12-16 22:39:07.317093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.457 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 [2024-12-16 22:39:07.459529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 [2024-12-16 22:39:07.471692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 null0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 null1 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=488033 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 488033 /tmp/host.sock 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 488033 ']' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:18.458 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 [2024-12-16 22:39:07.552320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
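At this point two SPDK applications are running: the NVMe-oF target inside the namespace (RPC server on the default /var/tmp/spdk.sock) and a host-side nvmf_tgt started with -r /tmp/host.sock that plays the discovery client. Assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py (its optional -s flag selects which app's socket to talk to), the configuration traced in this block reduces to:

    # Target side (default socket): TCP transport, discovery listener, null bdevs.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512    # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    # Host side: enable bdev_nvme logging, then start discovery against port 8009.
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test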
00:33:18.458 [2024-12-16 22:39:07.552361] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488033 ] 00:33:18.458 [2024-12-16 22:39:07.627244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.458 [2024-12-16 22:39:07.650343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.458 22:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.458 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.459 [2024-12-16 22:39:08.057206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:18.459 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:18.718 22:39:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:19.285 [2024-12-16 22:39:08.765394] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:19.285 [2024-12-16 22:39:08.765416] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:19.285 
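The checks from here on all go through waitforcondition, whose trace fragments (local max=10, (( max-- )), eval of the condition string, sleep 1) recur for the rest of this run. Pieced together from those fragments, the helper is essentially the following sketch:

    # Poll an arbitrary bash condition, up to 10 attempts 1 s apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition satisfied
            sleep 1                    # give target/host time to converge
        done
        return 1                       # condition never came true
    }

Here the condition is [[ "$(get_subsystem_names)" == "nvme0" ]]: the first evaluation fails because the discovery controller has not attached yet, the helper sleeps once, and the bdev_nvme INFO lines around this point show the attach completing before the retry succeeds.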
[2024-12-16 22:39:08.765428] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:19.285 [2024-12-16 22:39:08.891797] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:19.544 [2024-12-16 22:39:09.066778] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:19.544 [2024-12-16 22:39:09.067440] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1624f60:1 started. 00:33:19.544 [2024-12-16 22:39:09.068769] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:19.544 [2024-12-16 22:39:09.068783] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:19.544 [2024-12-16 22:39:09.074811] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1624f60 was disconnected and freed. delete nvme_qpair. 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.803 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:19.804 [2024-12-16 22:39:09.469217] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x160f020:1 started. 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.804 [2024-12-16 22:39:09.475917] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x160f020 was disconnected and freed. delete nvme_qpair. 
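Each newly attached namespace is expected to surface as exactly one new bdev notification on the host, which is what the is_notification_count_eq checks verify. By hand, assuming the same scripts/rpc.py wrapper, the query is:

    # Fetch notifications starting from a given id and count them.
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'

The -i argument is the first notification id to fetch, so after every check the test advances notify_id by the count it saw (notify_id=1 above, notify_id=2 after the check below) and only ever counts events newer than the last one; here the single new event corresponds to null1 being attached and showing up as nvme0n2.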
00:33:19.804 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.063 [2024-12-16 22:39:09.569234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:20.063 [2024-12-16 22:39:09.569522] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:20.063 [2024-12-16 22:39:09.569542] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:20.063 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.064 [2024-12-16 22:39:09.695908] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:20.064 22:39:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:20.322 [2024-12-16 22:39:09.801782] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:20.322 [2024-12-16 22:39:09.801818] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:20.322 [2024-12-16 22:39:09.801826] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:20.322 [2024-12-16 22:39:09.801831] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:21.259 22:39:10 
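The path list has now grown from "4420" to "4420 4421": adding the 4421 listener made the discovery service report a new path, and bdev_nvme attached a second qpair to the same nvme0 controller. get_subsystem_paths distills that from the controller dump; the same query by hand (again assuming the rpc.py wrapper) is:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

It printed only "4420" on the first attempt above, so waitforcondition slept for a second while the discovery poller attached the new path, then saw "4420 4421" on the retry.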
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.259 [2024-12-16 22:39:10.809399] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:21.259 [2024-12-16 22:39:10.809421] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:21.259 [2024-12-16 22:39:10.811167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.259 [2024-12-16 22:39:10.811185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.259 [2024-12-16 22:39:10.811197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.259 [2024-12-16 22:39:10.811204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.259 [2024-12-16 22:39:10.811211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.259 [2024-12-16 22:39:10.811217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.259 [2024-12-16 22:39:10.811224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.259 [2024-12-16 22:39:10.811231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.259 [2024-12-16 22:39:10.811253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:21.259 22:39:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:21.259 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.259 [2024-12-16 22:39:10.821178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.259 [2024-12-16 22:39:10.831215] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.259 [2024-12-16 22:39:10.831228] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.259 [2024-12-16 22:39:10.831235] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.259 [2024-12-16 22:39:10.831240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.259 [2024-12-16 22:39:10.831258] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:21.259 [2024-12-16 22:39:10.831451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.259 [2024-12-16 22:39:10.831465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.259 [2024-12-16 22:39:10.831473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.259 [2024-12-16 22:39:10.831485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.259 [2024-12-16 22:39:10.831502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.259 [2024-12-16 22:39:10.831509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.260 [2024-12-16 22:39:10.831517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.260 [2024-12-16 22:39:10.831523] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.260 [2024-12-16 22:39:10.831528] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:21.260 [2024-12-16 22:39:10.831533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.260 [2024-12-16 22:39:10.841288] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.260 [2024-12-16 22:39:10.841300] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.260 [2024-12-16 22:39:10.841308] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.841312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.260 [2024-12-16 22:39:10.841326] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.841492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.260 [2024-12-16 22:39:10.841503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.260 [2024-12-16 22:39:10.841510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.260 [2024-12-16 22:39:10.841521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.260 [2024-12-16 22:39:10.841537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.260 [2024-12-16 22:39:10.841544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.260 [2024-12-16 22:39:10.841551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.260 [2024-12-16 22:39:10.841557] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.260 [2024-12-16 22:39:10.841562] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.260 [2024-12-16 22:39:10.841566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.260 [2024-12-16 22:39:10.851357] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.260 [2024-12-16 22:39:10.851367] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.260 [2024-12-16 22:39:10.851371] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.851375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.260 [2024-12-16 22:39:10.851388] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:21.260 [2024-12-16 22:39:10.851481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.260 [2024-12-16 22:39:10.851492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.260 [2024-12-16 22:39:10.851499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.260 [2024-12-16 22:39:10.851508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.260 [2024-12-16 22:39:10.851517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.260 [2024-12-16 22:39:10.851523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.260 [2024-12-16 22:39:10.851530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.260 [2024-12-16 22:39:10.851535] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.260 [2024-12-16 22:39:10.851540] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.260 [2024-12-16 22:39:10.851543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:21.260 [2024-12-16 22:39:10.861419] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.260 [2024-12-16 22:39:10.861435] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.260 [2024-12-16 22:39:10.861439] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.861443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:21.260 [2024-12-16 22:39:10.861457] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:21.260 [2024-12-16 22:39:10.861635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.260 [2024-12-16 22:39:10.861655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.260 [2024-12-16 22:39:10.861662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.260 [2024-12-16 22:39:10.861673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.260 [2024-12-16 22:39:10.861689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.260 [2024-12-16 22:39:10.861699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.260 [2024-12-16 22:39:10.861708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.260 [2024-12-16 22:39:10.861713] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.260 [2024-12-16 22:39:10.861718] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.260 [2024-12-16 22:39:10.861722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.260 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:21.260 [2024-12-16 22:39:10.871487] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.260 [2024-12-16 22:39:10.871500] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.260 [2024-12-16 22:39:10.871505] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.871509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.260 [2024-12-16 22:39:10.871527] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:21.260 [2024-12-16 22:39:10.871677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.260 [2024-12-16 22:39:10.871689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.260 [2024-12-16 22:39:10.871696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.260 [2024-12-16 22:39:10.871706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.260 [2024-12-16 22:39:10.871715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.260 [2024-12-16 22:39:10.871721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.260 [2024-12-16 22:39:10.871728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.260 [2024-12-16 22:39:10.871733] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.260 [2024-12-16 22:39:10.871737] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.260 [2024-12-16 22:39:10.871741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.260 [2024-12-16 22:39:10.881558] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.260 [2024-12-16 22:39:10.881570] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.260 [2024-12-16 22:39:10.881574] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.881578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.260 [2024-12-16 22:39:10.881592] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:21.260 [2024-12-16 22:39:10.881824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.260 [2024-12-16 22:39:10.881836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.261 [2024-12-16 22:39:10.881844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.261 [2024-12-16 22:39:10.881854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.261 [2024-12-16 22:39:10.881871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.261 [2024-12-16 22:39:10.881878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.261 [2024-12-16 22:39:10.881885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.261 [2024-12-16 22:39:10.881891] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
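The get_subsystem_names and get_bdev_list helpers whose expansions are traced above (host/discovery.sh@59 and @55; the @63 variant for paths appears a little further down) each reduce to one RPC piped through jq. A reconstruction consistent with the xtrace, with the /tmp/host.sock path hard-coded to match it:

    get_subsystem_names() {
        # Controller names the host has attached, as one sorted line (e.g. "nvme0")
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdevs created from the attached namespaces (e.g. "nvme0n1 nvme0n2")
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # Service IDs (ports) of every path to one controller, numerically sorted
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }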
00:33:21.261 [2024-12-16 22:39:10.881895] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.261 [2024-12-16 22:39:10.881899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.261 [2024-12-16 22:39:10.891622] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:21.261 [2024-12-16 22:39:10.891633] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:21.261 [2024-12-16 22:39:10.891637] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:21.261 [2024-12-16 22:39:10.891645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:21.261 [2024-12-16 22:39:10.891658] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:21.261 [2024-12-16 22:39:10.891811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.261 [2024-12-16 22:39:10.891821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f6ef0 with addr=10.0.0.2, port=4420 00:33:21.261 [2024-12-16 22:39:10.891828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f6ef0 is same with the state(6) to be set 00:33:21.261 [2024-12-16 22:39:10.891838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f6ef0 (9): Bad file descriptor 00:33:21.261 [2024-12-16 22:39:10.891848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.261 [2024-12-16 22:39:10.891854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.261 [2024-12-16 22:39:10.891861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:21.261 [2024-12-16 22:39:10.891867] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.261 [2024-12-16 22:39:10.891871] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.261 [2024-12-16 22:39:10.891875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:21.261 [2024-12-16 22:39:10.895915] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:21.261 [2024-12-16 22:39:10.895931] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:21.261 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.521 22:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.521 22:39:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.898 [2024-12-16 22:39:12.225670] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:22.898 [2024-12-16 22:39:12.225686] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:22.898 [2024-12-16 22:39:12.225704] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:22.898 [2024-12-16 22:39:12.352071] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:22.898 [2024-12-16 22:39:12.457738] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:22.898 [2024-12-16 22:39:12.458329] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x160d7d0:1 started. 
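The notification bookkeeping traced above (host/discovery.sh@74-75, wrapped by is_notification_count_eq at @79-80) counts bdev add/remove events past a cursor: notify_get_notifications -i 2 returned 0 new events before the discovery stop and 2 after it (the removal of nvme0n1 and nvme0n2), moving notify_id from 2 to 4. A reconstruction consistent with that progression; how the cursor advances (notify_id += count) is inferred from the 2 to 4 step rather than read from the script:

    get_notification_count() {
        # Count events newer than the current cursor, then advance it
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }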
00:33:22.898 [2024-12-16 22:39:12.459891] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:22.898 [2024-12-16 22:39:12.459916] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.898 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.898 request: 00:33:22.898 { 00:33:22.898 "name": "nvme", 00:33:22.898 "trtype": "tcp", 00:33:22.898 "traddr": "10.0.0.2", 00:33:22.898 "adrfam": "ipv4", 00:33:22.898 "trsvcid": "8009", 00:33:22.898 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.898 "wait_for_attach": true, 00:33:22.898 "method": "bdev_nvme_start_discovery", 00:33:22.898 "req_id": 1 00:33:22.898 } 00:33:22.898 Got JSON-RPC error response 00:33:22.898 response: 00:33:22.898 { 00:33:22.899 "code": -17, 00:33:22.899 "message": "File exists" 00:33:22.899 } 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.899 22:39:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.899 [2024-12-16 22:39:12.504769] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x160d7d0 was disconnected and freed. delete nvme_qpair. 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.899 request: 00:33:22.899 { 00:33:22.899 "name": "nvme_second", 00:33:22.899 "trtype": "tcp", 00:33:22.899 "traddr": "10.0.0.2", 00:33:22.899 "adrfam": "ipv4", 00:33:22.899 "trsvcid": "8009", 00:33:22.899 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.899 "wait_for_attach": true, 00:33:22.899 "method": 
"bdev_nvme_start_discovery", 00:33:22.899 "req_id": 1 00:33:22.899 } 00:33:22.899 Got JSON-RPC error response 00:33:22.899 response: 00:33:22.899 { 00:33:22.899 "code": -17, 00:33:22.899 "message": "File exists" 00:33:22.899 } 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.899 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:23.158 22:39:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.158 22:39:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:24.094 [2024-12-16 22:39:13.700802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:24.094 [2024-12-16 22:39:13.700826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16246f0 with addr=10.0.0.2, port=8010 00:33:24.094 [2024-12-16 22:39:13.700837] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:24.094 [2024-12-16 22:39:13.700843] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:24.094 [2024-12-16 22:39:13.700849] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:25.030 [2024-12-16 22:39:14.703219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.030 [2024-12-16 22:39:14.703242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16246f0 with addr=10.0.0.2, port=8010 00:33:25.030 [2024-12-16 22:39:14.703252] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:25.030 [2024-12-16 22:39:14.703258] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:25.030 [2024-12-16 22:39:14.703264] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:26.408 [2024-12-16 22:39:15.705401] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:26.408 request: 00:33:26.408 { 00:33:26.408 "name": "nvme_second", 00:33:26.408 "trtype": "tcp", 00:33:26.408 "traddr": "10.0.0.2", 00:33:26.408 "adrfam": "ipv4", 00:33:26.408 "trsvcid": "8010", 00:33:26.408 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:26.408 "wait_for_attach": false, 00:33:26.408 "attach_timeout_ms": 3000, 00:33:26.408 "method": "bdev_nvme_start_discovery", 00:33:26.408 "req_id": 1 00:33:26.408 } 00:33:26.408 Got JSON-RPC error response 00:33:26.408 response: 00:33:26.408 { 00:33:26.408 "code": -110, 00:33:26.408 "message": "Connection timed out" 00:33:26.408 } 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:26.408 22:39:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 488033 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.408 rmmod nvme_tcp 00:33:26.408 rmmod nvme_fabrics 00:33:26.408 rmmod nvme_keyring 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 487982 ']' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 487982 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 487982 ']' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 487982 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487982 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487982' 00:33:26.408 killing process with pid 487982 00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 487982 
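Before this teardown, the duplicate-start and timeout cases were asserted with the NOT wrapper (common/autotest_common.sh@652-679): restarting discovery under a name already in use must fail with -17 "File exists", and the attach to the dead 8010 listener must fail with -110 "Connection timed out". A reconstruction consistent with the trace; the valid_exec_arg/type check at @640-644 is elided here:

    NOT() {
        # Succeed only if the wrapped command fails
        local es=0
        "$@" || es=$?
        # es > 128 means the command died from a signal; propagate that as-is
        (( es > 128 )) && return "$es"
        (( !es == 0 ))   # invert: a plain nonzero status becomes success
    }

    # As used at host/discovery.sh@143/@149/@155 above:
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w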
00:33:26.408 22:39:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 487982 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.408 22:39:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.943 00:33:28.943 real 0m17.060s 00:33:28.943 user 0m20.250s 00:33:28.943 sys 0m5.812s 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:28.943 ************************************ 00:33:28.943 END TEST nvmf_host_discovery 00:33:28.943 ************************************ 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.943 ************************************ 00:33:28.943 START TEST nvmf_host_multipath_status 00:33:28.943 ************************************ 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:28.943 * Looking for test storage... 
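The real/user/sys block and the START/END banners around it come from the run_test wrapper invoked at nvmf/nvmf_host.sh@27: it times the test script and brackets its output so logs like this one can be split per test. A sketch of the shape; the real helper in autotest_common.sh also validates its argument count (the '[' 3 -le 1 ']' check at @1105) and toggles xtrace around the banners (@1111/@1130):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # produces the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_host_multipath_status \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp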
00:33:28.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.943 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:28.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.943 --rc genhtml_branch_coverage=1 00:33:28.943 --rc genhtml_function_coverage=1 00:33:28.944 --rc genhtml_legend=1 00:33:28.944 --rc geninfo_all_blocks=1 00:33:28.944 --rc geninfo_unexecuted_blocks=1 00:33:28.944 00:33:28.944 ' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.944 --rc genhtml_branch_coverage=1 00:33:28.944 --rc genhtml_function_coverage=1 00:33:28.944 --rc genhtml_legend=1 00:33:28.944 --rc geninfo_all_blocks=1 00:33:28.944 --rc geninfo_unexecuted_blocks=1 00:33:28.944 00:33:28.944 ' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.944 --rc genhtml_branch_coverage=1 00:33:28.944 --rc genhtml_function_coverage=1 00:33:28.944 --rc genhtml_legend=1 00:33:28.944 --rc geninfo_all_blocks=1 00:33:28.944 --rc geninfo_unexecuted_blocks=1 00:33:28.944 00:33:28.944 ' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.944 --rc genhtml_branch_coverage=1 00:33:28.944 --rc genhtml_function_coverage=1 00:33:28.944 --rc genhtml_legend=1 00:33:28.944 --rc geninfo_all_blocks=1 00:33:28.944 --rc geninfo_unexecuted_blocks=1 00:33:28.944 00:33:28.944 ' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
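The version check traced above (lt 1.15 2 at autotest_common.sh@1711, expanding into cmp_versions in scripts/common.sh@333-368) splits both versions on '.', '-' and ':' and compares component by component, with a decimal() guard validating each field. A simplified reconstruction that reproduces the traced behavior (1.15 < 2 decided at the first component), not the helper verbatim; it assumes purely numeric fields:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer component list; missing fields compare as 0
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }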
00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:28.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.944 22:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:35.517 22:39:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:35.517 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:35.517 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:35.517 Found net devices under 0000:af:00.0: cvl_0_0 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:33:35.517 Found net devices under 0000:af:00.1: cvl_0_1 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:35.517 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.518 22:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.518 22:39:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:35.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:33:35.518 00:33:35.518 --- 10.0.0.2 ping statistics --- 00:33:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.518 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:35.518 00:33:35.518 --- 10.0.0.1 ping statistics --- 00:33:35.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.518 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=493108 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 493108 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493108 ']' 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.518 22:39:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.518 [2024-12-16 22:39:24.301132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:35.518 [2024-12-16 22:39:24.301182] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.518 [2024-12-16 22:39:24.380898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:35.518 [2024-12-16 22:39:24.402914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.518 [2024-12-16 22:39:24.402949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.518 [2024-12-16 22:39:24.402956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.518 [2024-12-16 22:39:24.402962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.518 [2024-12-16 22:39:24.402967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.518 [2024-12-16 22:39:24.404095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.518 [2024-12-16 22:39:24.404097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=493108 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:35.518 [2024-12-16 22:39:24.703521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:35.518 Malloc0 00:33:35.518 22:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:33:35.518 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.777 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.035 [2024-12-16 22:39:25.530837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.035 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:36.035 [2024-12-16 22:39:25.719319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=493355 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 493355 /var/tmp/bdevperf.sock 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493355 ']' 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:36.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:36.294 22:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:36.553 22:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:36.811 Nvme0n1 00:33:36.811 22:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:37.378 Nvme0n1 00:33:37.378 22:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:37.378 22:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:39.281 22:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:39.281 22:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:39.540 22:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:39.798 22:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:40.734 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:40.734 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:40.734 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.734 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.992 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.993 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:40.993 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.993 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.252 22:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.511 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.511 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.511 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.511 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.769 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.770 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.770 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.770 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.028 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.028 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:42.028 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:33:42.286 22:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:42.545 22:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:43.481 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:43.481 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:43.481 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.481 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.740 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.998 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.998 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.998 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.998 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.257 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.257 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.257 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:33:44.257 22:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.515 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.515 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:44.515 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.515 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.774 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.774 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:44.774 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:45.033 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:45.033 22:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.410 22:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:46.668 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.668 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:46.668 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.668 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.927 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:47.185 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.185 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:47.185 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.185 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:47.444 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.444 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:47.444 22:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:47.702 22:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:47.960 22:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:48.896 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:48.896 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:48.896 22:39:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.896 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:49.154 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:49.155 22:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.413 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.413 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:49.413 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.413 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.672 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.672 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:49.672 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.672 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.931 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.931 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:49.931 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.931 22:39:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:50.190 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:50.190 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:50.190 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:50.448 22:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:50.448 22:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.824 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.825 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.825 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.083 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.083 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.083 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.083 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:52.342 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.342 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:52.342 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.342 22:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:52.600 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:52.859 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:53.117 22:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:54.053 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:54.053 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:54.053 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.053 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:54.312 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.312 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:54.312 22:39:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.312 22:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:54.571 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.571 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:54.571 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.571 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.830 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:55.089 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.089 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:55.089 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.089 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:55.347 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.347 22:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:55.605 22:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:55.605 22:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:55.863 22:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:56.120 22:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:57.055 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:57.055 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:57.055 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.055 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.314 22:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:57.573 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.573 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:57.573 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.573 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:57.832 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.832 22:39:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:57.832 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:57.832 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.090 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.090 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:58.090 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.090 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:58.349 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.349 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:58.349 22:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:58.349 22:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:58.608 22:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:59.545 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:59.545 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:59.546 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.546 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:59.804 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:59.804 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:59.804 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.804 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:00.063 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.063 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:00.063 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:00.063 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.322 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.322 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:00.322 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.322 22:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:00.581 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.839 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:00.839 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.839 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:00.839 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:01.098 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:01.357 22:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
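The trace above repeatedly exercises three small bash helpers from test/nvmf/host/multipath_status.sh. Below is a minimal sketch of what they appear to do, reconstructed only from the traced commands; "$rpc_py" stands in for the scripts/rpc.py invocations shown in the trace, and the NQN, address, and ports are the ones used in this run:

# Flip the ANA state of the two target listeners (port 4420 gets $1, port 4421 gets $2).
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Query bdevperf over its private RPC socket for its io_path view, pick the path whose
# transport service ID matches the given port, and compare one attribute ("current",
# "connected", or "accessible") against the expected value.
port_status() {
    local port=$1 attr=$2 expected=$3
    [[ $($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
}

# Verify all six attributes at once: current/connected/accessible for each of the two ports.
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

The "sleep 1" the trace runs after each set_ANA_state gives the host a moment to consume the resulting ANA change notification before check_status samples the path state again.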
00:34:02.293 22:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:02.293 22:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:02.293 22:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.293 22:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:02.551 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.551 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:02.551 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.551 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:02.810 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.810 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:02.810 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.810 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.069 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.327 22:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:03.586 22:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.586 22:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:03.586 22:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:03.845 22:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:04.103 22:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:05.040 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:05.040 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:05.040 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.040 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:05.299 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.299 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:05.299 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.299 22:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:05.564 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.564 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:05.564 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.564 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.822 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:06.080 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:06.080 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:06.080 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.080 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 493355 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493355 ']' 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493355 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493355 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493355' 00:34:06.339 killing process with pid 493355 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493355 00:34:06.339 22:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493355 00:34:06.339 { 00:34:06.339 "results": [ 00:34:06.339 { 00:34:06.339 "job": "Nvme0n1", 00:34:06.339 
"core_mask": "0x4", 00:34:06.339 "workload": "verify", 00:34:06.339 "status": "terminated", 00:34:06.339 "verify_range": { 00:34:06.339 "start": 0, 00:34:06.339 "length": 16384 00:34:06.339 }, 00:34:06.339 "queue_depth": 128, 00:34:06.339 "io_size": 4096, 00:34:06.339 "runtime": 28.930093, 00:34:06.339 "iops": 10602.281852325881, 00:34:06.339 "mibps": 41.41516348564797, 00:34:06.339 "io_failed": 0, 00:34:06.339 "io_timeout": 0, 00:34:06.339 "avg_latency_us": 12052.503844805919, 00:34:06.339 "min_latency_us": 733.3790476190476, 00:34:06.339 "max_latency_us": 3083812.083809524 00:34:06.339 } 00:34:06.339 ], 00:34:06.339 "core_count": 1 00:34:06.339 } 00:34:06.625 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 493355 00:34:06.625 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:06.625 [2024-12-16 22:39:25.793760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:06.625 [2024-12-16 22:39:25.793813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493355 ] 00:34:06.625 [2024-12-16 22:39:25.867467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.625 [2024-12-16 22:39:25.889764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.626 Running I/O for 90 seconds... 00:34:06.626 11344.00 IOPS, 44.31 MiB/s [2024-12-16T21:39:56.327Z] 11495.00 IOPS, 44.90 MiB/s [2024-12-16T21:39:56.327Z] 11464.33 IOPS, 44.78 MiB/s [2024-12-16T21:39:56.327Z] 11401.25 IOPS, 44.54 MiB/s [2024-12-16T21:39:56.327Z] 11446.00 IOPS, 44.71 MiB/s [2024-12-16T21:39:56.327Z] 11449.50 IOPS, 44.72 MiB/s [2024-12-16T21:39:56.327Z] 11457.29 IOPS, 44.76 MiB/s [2024-12-16T21:39:56.327Z] 11474.00 IOPS, 44.82 MiB/s [2024-12-16T21:39:56.327Z] 11490.33 IOPS, 44.88 MiB/s [2024-12-16T21:39:56.327Z] 11490.20 IOPS, 44.88 MiB/s [2024-12-16T21:39:56.327Z] 11490.91 IOPS, 44.89 MiB/s [2024-12-16T21:39:56.327Z] 11470.08 IOPS, 44.81 MiB/s [2024-12-16T21:39:56.327Z] [2024-12-16 22:39:39.894042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 
[2024-12-16 22:39:39.894150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.894992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.894999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.626 
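In this try.txt dump, the entries come in pairs describing one I/O each: nvme_qpair.c:243 prints the submitted command (a 4 KiB WRITE or READ with its LBA) and nvme_qpair.c:474 prints the matching completion. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is status code type 0x3 (path related) / status code 0x02 (ANA inaccessible), i.e. the error the host sees for I/O routed to a listener whose ANA state was just set to inaccessible. To tally how many completions came back with that status in a captured dump, a one-liner like the following should do (grep -o rather than grep -c, so re-wrapped lines don't undercount):

grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | wc -l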
[2024-12-16 22:39:39.895442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.626 [2024-12-16 22:39:39.895825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.626 [2024-12-16 22:39:39.895847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.626 [2024-12-16 22:39:39.895867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.626 [2024-12-16 22:39:39.895887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.626 [2024-12-16 22:39:39.895907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.626 [2024-12-16 22:39:39.895919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.626 [2024-12-16 22:39:39.895926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.895938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.895945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.895957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.895964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.895976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.895982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.895994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
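The throughput figures scattered through this dump are plain IOPS x io_size arithmetic: the job ran with "io_size": 4096, so MiB/s is IOPS / 256. A quick check against two of the numbers above:

awk 'BEGIN {
    io_size = 4096                                                 # bytes per I/O, from the JSON summary
    printf "%.5f\n", 10602.281852325881 * io_size / (1024 * 1024)  # 41.41516 -> the "mibps" field
    printf "%.2f\n", 11344.00 * io_size / (1024 * 1024)            # 44.31    -> the first per-second sample
}'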
00:34:06.627 [2024-12-16 22:39:39.896722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.896837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.896985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.896992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.627 [2024-12-16 22:39:39.897143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.897162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.627 [2024-12-16 22:39:39.897174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.627 [2024-12-16 22:39:39.897181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.628 
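Once the elapsed-time prefixes are stripped away, the summary block bdevperf emitted at shutdown (the "results" JSON above) is ordinary JSON, so the headline numbers can be pulled out with jq. A hypothetical extraction, assuming the clean block was saved to result.json:

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us over \(.runtime) s"' result.json
# -> Nvme0n1: 10602.281852325881 IOPS, 41.41516348564797 MiB/s, avg latency 12052.503844805919 us over 28.930093 s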
[2024-12-16 22:39:39.897797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.897982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.897994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 
[2024-12-16 22:39:39.898713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.628 [2024-12-16 22:39:39.898896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4728 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.628 [2024-12-16 22:39:39.898904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.898916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.898923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.898935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.898942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.898954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.898961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.898973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.898980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.898992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:16 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.899587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:34:06.629 [2024-12-16 22:39:39.899868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.899983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.899990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.900002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.900009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.900021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.900028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.900040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.629 [2024-12-16 22:39:39.900047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.900059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.629 [2024-12-16 22:39:39.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.629 [2024-12-16 22:39:39.911718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.911897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.911922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.911951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.911976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.911992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:06.630 [2024-12-16 22:39:39.912265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.630 [2024-12-16 22:39:39.912315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912762] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.912989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.912998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.913015] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.913024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.913040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.913049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.913065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.630 [2024-12-16 22:39:39.913074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.630 [2024-12-16 22:39:39.913090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:34:06.631 [2024-12-16 22:39:39.913273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.913977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.913994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.914003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.914019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.914028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.914044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.914053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.914070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.914078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.914095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.914104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.915138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.631 [2024-12-16 22:39:39.915170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 
[2024-12-16 22:39:39.915309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.631 [2024-12-16 22:39:39.915543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.631 [2024-12-16 22:39:39.915559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.915801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.915944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.915953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
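[Editor's note] The "(03/02)" pair SPDK prints is the completion's status code type and status code: per the NVMe base specification, SCT 0x3 is Path Related Status and SC 0x02 under that type is Asymmetric Access Inaccessible, while the trailing sqhd/p/m/dnr fields are the submission queue head pointer, phase tag, "more" bit, and "do not retry" bit from the same completion entry. Notably dnr:0 on every completion here, so the host is permitted to retry these I/Os once the ANA state changes back. As a minimal sketch of how those fields pack into the 16-bit phase+status word of a completion queue entry (the bit layout follows the NVMe base spec; the function itself is illustrative, not SPDK code):

```python
def decode_cqe_status(word: int) -> dict:
    """Unpack the 16-bit phase+status word of an NVMe completion queue entry.

    Bit layout (NVMe base spec, CQE DW3 bits 16..31):
      bit 0      P   - phase tag
      bits 1-8   SC  - status code
      bits 9-11  SCT - status code type (0x3 = Path Related Status)
      bits 12-13 CRD - command retry delay (NVMe 1.4+; reserved before that)
      bit 14     M   - more status info available via Get Log Page
      bit 15     DNR - do not retry
    """
    return {
        "p":   word & 0x1,
        "sc":  (word >> 1) & 0xFF,
        "sct": (word >> 9) & 0x7,
        "crd": (word >> 12) & 0x3,
        "m":   (word >> 14) & 0x1,
        "dnr": (word >> 15) & 0x1,
    }

# A status word with SCT=0x3 and SC=0x02 reproduces the "(03/02)"
# printed in the trace above, with p:0 m:0 dnr:0 as in these lines.
status = (0x3 << 9) | (0x02 << 1)
assert decode_cqe_status(status) == {
    "p": 0, "sc": 0x02, "sct": 0x3, "crd": 0, "m": 0, "dnr": 0,
}
```

The sqhd values cycling through 0x0000-0x007f in these completions are consistent with a 128-entry submission queue whose head pointer wraps as the queue drains.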
00:34:06.632 [2024-12-16 22:39:39.916653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.632 [2024-12-16 22:39:39.916945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.916987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.916996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.917012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.917021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.917037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.917046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.917062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.917408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.632 [2024-12-16 22:39:39.917421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.632 [2024-12-16 22:39:39.917438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 
[2024-12-16 22:39:39.917733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.917977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4496 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.917987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:107 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.918532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.918548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.633 
[2024-12-16 22:39:39.924319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.633 [2024-12-16 22:39:39.924377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.633 [2024-12-16 22:39:39.924393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.924599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.924608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.925364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.925392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.925416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.925982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.925998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.634 [2024-12-16 22:39:39.926031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:06.634 [2024-12-16 22:39:39.926055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.634 [2024-12-16 22:39:39.926435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.634 [2024-12-16 22:39:39.926450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926792] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.635 [2024-12-16 22:39:39.926827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.926850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.926874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.926898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.926914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.926923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:34:06.635 [2024-12-16 22:39:39.927634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.927979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.927988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.635 [2024-12-16 22:39:39.928185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.635 [2024-12-16 22:39:39.928206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 
[2024-12-16 22:39:39.928608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4736 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.928983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.928999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.636 [2024-12-16 22:39:39.929906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.929931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.929959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.929984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.929999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.636 [2024-12-16 22:39:39.930179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.636 [2024-12-16 22:39:39.930200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:34:06.637 [2024-12-16 22:39:39.930327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.637 [2024-12-16 22:39:39.930567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.637 [2024-12-16 22:39:39.930901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.637 [2024-12-16 22:39:39.930912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every outstanding I/O on qid:1 (WRITEs lba 4464-5160, SGL DATA BLOCK OFFSET; READs lba 4144-4456, SGL TRANSPORT DATA BLOCK; len:8 each), every command completing ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0 and sqhd incrementing per completion, timestamps 2024-12-16 22:39:39.930931 through 22:39:39.939165 ...]
00:34:06.641 [2024-12-16 22:39:39.939183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:06.641 [2024-12-16 22:39:39.939217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.939422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.939432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.641 [2024-12-16 22:39:39.940424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940601] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 
22:39:39.940831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.641 [2024-12-16 22:39:39.940962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.641 [2024-12-16 22:39:39.940974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.940981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.940993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.940999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4864 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:34:06.642 [2024-12-16 22:39:39.941585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.642 [2024-12-16 22:39:39.941592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.941605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.941612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.642 [2024-12-16 22:39:39.942416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.642 [2024-12-16 22:39:39.942424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 
[2024-12-16 22:39:39.942804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.942986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4664 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.942992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.643 [2024-12-16 22:39:39.943161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.643 [2024-12-16 22:39:39.943173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:87 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.643 [2024-12-16 22:39:39.943180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:06.643 [2024-12-16 22:39:39.943195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.643 [2024-12-16 22:39:39.943202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:06.643 [2024-12-16 22:39:39.943982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:06.643 [2024-12-16 22:39:39.943989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[... repeated command/completion pairs condensed: WRITEs (lba 4464-5160, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs (lba 4144-4456, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all len:8 on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
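For triage, a count of these records says more than reading them one by one: it confirms that every I/O issued on this path during the window is rejected with the same ANA status rather than failing intermittently. A minimal shell sketch for summarizing a saved copy of this console output (the file name nvmf.log is illustrative, not an artifact of this run):

  # Count the printed I/O commands per opcode on queue pair 1.
  grep -oE 'NOTICE\*: (READ|WRITE) sqid:1' nvmf.log | sort | uniq -c

  # Count the completions carrying the ANA path status shown above
  # (SCT 03h Path Related / SC 02h Asymmetric Access Inaccessible).
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' nvmf.log | wc -l

As the pairing above shows, each failed command is printed next to its completion, so the two totals should match; dnr:0 in every completion means the do-not-retry bit is clear, and the host side is expected to retry these I/Os on another path once failover completes.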
[... the same WRITE/READ command and ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion pattern continues through lba:5048 ...]
00:34:06.647 [2024-12-16 22:39:39.949049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.647 [2024-12-16 22:39:39.949056]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.647 [2024-12-16 22:39:39.949384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.647 [2024-12-16 22:39:39.949396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.949414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.949433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.949452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.949471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.949491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.949498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950325] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:34:06.648 [2024-12-16 22:39:39.950512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.950984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.950991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.951004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.951010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.951022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.951029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.951041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.648 [2024-12-16 22:39:39.951052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.951064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.648 [2024-12-16 22:39:39.951071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.648 [2024-12-16 22:39:39.951083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.648 [2024-12-16 22:39:39.951089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951189] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 
22:39:39.951385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 
nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.649 [2024-12-16 22:39:39.951823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.649 [2024-12-16 22:39:39.951947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.649 [2024-12-16 22:39:39.951954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.951966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.951972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.951984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.951992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.650 [2024-12-16 22:39:39.952128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:34:06.650 [2024-12-16 22:39:39.952605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.650 [2024-12-16 22:39:39.952786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.650 [2024-12-16 22:39:39.952794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:06.650 [2024-12-16 22:39:39.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.650 [2024-12-16 22:39:39.952814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:06.650 [2024-12-16 22:39:39.952826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.650 [2024-12-16 22:39:39.952833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... several hundred similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: every WRITE (lba:4464-5160) and READ (lba:4144-4456) on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-16 22:39:39.952845 through 22:39:39.961167 ...]
00:34:06.654 [2024-12-16 22:39:39.961182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:06.654 [2024-12-16 22:39:39.961188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:06.654 [2024-12-16 22:39:39.961207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:60 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.654 [2024-12-16 22:39:39.961388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:06.654 [2024-12-16 22:39:39.961403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:06.655 [2024-12-16 22:39:39.961410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:06.655 [2024-12-16 22:39:39.961424] nvme_qpair.c: 
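Every completion in these bursts carries the same status pair. In NVMe status terms, (03/02) is status code type 3h, Path Related Status, with status code 02h, Asymmetric Access Inaccessible: the controller refuses I/O on this path while the test drives ANA state changes. A minimal shell decoder for the (SCT/SC) token these lines print (only the codes seen in this log are mapped):

decode_status() {
    # Map the "(SCT/SC)" pair from spdk_nvme_print_completion lines.
    case "$1" in
        00/00) echo "Generic Command Status / Successful Completion" ;;
        03/02) echo "Path Related Status / Asymmetric Access Inaccessible" ;;
        *)     echo "unmapped status: $1" ;;
    esac
}
decode_status 03/02   # -> Path Related Status / Asymmetric Access Inaccessible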
00:34:06.655 11382.62 IOPS, 44.46 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10569.57 IOPS, 41.29 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9864.93 IOPS, 38.53 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9272.69 IOPS, 36.22 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9404.76 IOPS, 36.74 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9512.39 IOPS, 37.16 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9683.37 IOPS, 37.83 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 9872.35 IOPS, 38.56 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10046.90 IOPS, 39.25 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10116.41 IOPS, 39.52 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10169.26 IOPS, 39.72 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10229.58 IOPS, 39.96 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10349.36 IOPS, 40.43 MiB/s [2024-12-16T21:39:56.356Z]
00:34:06.655 10465.85 IOPS, 40.88 MiB/s [2024-12-16T21:39:56.356Z]
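The progress samples pair IOPS with MiB/s; at this test's 4096-byte I/O size (see the job line in the latency summary below) the second column is simply IOPS / 256, since 4096 B is 1/256 of a MiB. A quick check against the samples above:

for iops in 11382.62 10569.57 10465.85; do
    # MiB/s = IOPS * 4096 / 1048576 = IOPS / 256 for 4 KiB I/O
    awk -v i="$iops" 'BEGIN { printf "%.2f IOPS -> %.2f MiB/s\n", i, i / 256 }'
done
# prints 44.46, 41.29 and 40.88 MiB/s, matching the log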
00:34:06.655 [2024-12-16 22:39:53.594076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:06.655 [2024-12-16 22:39:53.594123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:06.655 [... command/completion pairs elided: READ lba:8920-9568 and WRITE lba:9600-9960, all on qid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0009-0043 ...]
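With thousands of near-identical notices, a per-opcode summary is more useful than the raw stream. A sketch that condenses a capture of this output (try.txt matches the scratch file this test writes under test/nvmf/host/; adjust the path to wherever the log was saved):

awk '/nvme_io_qpair_print_command/ {
    op = ""; lba = -1
    for (i = 1; i <= NF; i++) {
        if ($i == "READ" || $i == "WRITE") op = $i
        if ($i ~ /^lba:/) lba = substr($i, 5) + 0
    }
    if (op == "" || lba < 0) next
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (lba > hi[op]) hi[op] = lba
}
END { for (op in n) printf "%s: %d commands, lba %d-%d\n", op, n[op], lo[op], hi[op] }' try.txt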
00:34:06.656 [2024-12-16 22:39:53.595830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:06.656 [2024-12-16 22:39:53.595837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:06.656 [... elided: WRITE lba:9968-10000 on qid:1, completions ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd:0045-0047 ...]
00:34:06.656 10543.33 IOPS, 41.18 MiB/s [2024-12-16T21:39:56.357Z]
00:34:06.656 10575.71 IOPS, 41.31 MiB/s [2024-12-16T21:39:56.357Z]
00:34:06.656 Received shutdown signal, test time was about 28.930723 seconds
00:34:06.656
00:34:06.656 Latency(us)
00:34:06.656 [2024-12-16T21:39:56.357Z] Device Information : runtime(s)      IOPS   MiB/s  Fail/s  TO/s   Average      min         max
00:34:06.656 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:06.656 Verification LBA range: start 0x0 length 0x4000
00:34:06.656 Nvme0n1            :      28.93  10602.28   41.42    0.00  0.00  12052.50   733.38  3083812.08
00:34:06.656 [2024-12-16T21:39:56.357Z] ===================================================================================================================
00:34:06.656 [2024-12-16T21:39:56.357Z] Total              :             10602.28   41.42    0.00  0.00  12052.50   733.38  3083812.08
00:34:06.656 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:06.915 rmmod nvme_tcp
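Teardown is the setup in reverse: delete the subsystem over JSON-RPC, clear the traps, remove the scratch file, and unload the initiator modules. A condensed standalone sketch of the same sequence (the retry loop mirrors the for i in {1..20} traced above; treat the exact loop body as an approximation of nvmf/common.sh, not a copy):

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Remove the NVMe-oF subsystem from the running SPDK target via JSON-RPC.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

sync
set +e
# Module unload can race with in-flight disconnects, hence the retries.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e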
00:34:06.915 rmmod nvme_fabrics
00:34:06.915 rmmod nvme_keyring
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 493108 ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493108 ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493108'
00:34:06.915 killing process with pid 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493108
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:06.915 22:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:09.453
00:34:09.453 real 0m40.512s
00:34:09.453 user 1m50.031s
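The iptr helper restores firewall state by round-tripping the rule set and dropping only the suite's own entries, which are tagged SPDK_NVMF. The pattern, exactly as traced above:

# Strip every iptables rule the test suite tagged SPDK_NVMF, keep the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore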
00:34:09.453 sys 0m11.602s
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:09.453 ************************************
00:34:09.453 END TEST nvmf_host_multipath_status
00:34:09.453 ************************************
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.453 ************************************
00:34:09.453 START TEST nvmf_discovery_remove_ifc
00:34:09.453 ************************************
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:09.453 * Looking for test storage...
00:34:09.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:09.453 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:09.453 [... cmp_versions trace elided: ver1 and ver2 split on IFS=.-: into (1 15) and (2), ver1_l=2, ver2_l=1, field-by-field numeric compare finds 1 < 2, return 0 ...]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:09.454 [... LCOV_OPTS and LCOV export blocks elided: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1, exported as LCOV_OPTS and again embedded in LCOV='lcov ...' ...]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
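cmp_versions splits both version strings on '.', '-' and ':' and compares them numerically field by field, which is how "lt 1.15 2" resolves to true in the trace above. A self-contained sketch of that logic, assuming purely numeric components:

version_lt() {
    # Return 0 when $1 sorts strictly before $2, field-wise and numeric.
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2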
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain dirs repeated by earlier sourcings ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
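nvme gen-hostnqn emits a uuid-based NQN, and the harness reuses the trailing uuid as the host ID, as the two assignments above show. The same derivation by hand (values differ per invocation):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the uuid suffix
echo "$NVME_HOSTNQN"
echo "$NVME_HOSTID"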
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same toolchain dirs and system tail as above ...]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain dirs and system tail as above ...]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [... the final PATH value ...]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:34:09.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
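The "integer expression expected" message is genuine script output, not log corruption: at nvmf/common.sh line 33 the left-hand operand of -eq expands to an empty string, and test demands integers on both sides. A two-line reproduction plus the usual guard (the variable name is illustrative):

x=""
[ "$x" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
[ "${x:-0}" -eq 1 ]   # defaulting the empty value sidesteps the error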
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:34:09.454 22:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:16.023 [... empty-array declarations elided: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 ...]
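discovery_port=8009 with the well-known NQN nqn.2014-08.org.nvmexpress.discovery is the standard NVMe-oF discovery endpoint; once nvmftestinit has the target up, an initiator can enumerate its subsystems with nvme-cli. A usage sketch (the address is illustrative, built from the NVMF_IP_PREFIX and NVMF_IP_LEAST_ADDR values set earlier):

# Enumerate subsystems advertised by the discovery service on TCP/8009.
nvme discover -t tcp -a 192.168.100.8 -s 8009 --hostnqn=nqn.2021-12.io.spdk:test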
00:34:16.023 [... mlx declaration and pci_bus_cache device-id registration elided: e810 += 0x1592, 0x159b; x722 += 0x37d2; mlx += 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013 ...]
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:16.023 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:34:16.024 Found 0000:af:00.0 (0x8086 - 0x159b)
00:34:16.024 [... per-device checks elided: driver ice is neither unknown nor unbound, 0x159b matches no Mellanox id, transport is tcp not rdma ...]
00:34:16.024 22:40:04
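NIC discovery is a /sys walk: for each PCI function with a supported vendor/device id, the harness globs the net interfaces bound to it, which is exactly the pci_net_devs pattern traced here. A standalone sketch for the E810 id seen on this node:

# List net interfaces behind each Intel E810 function (0x8086:0x159b),
# following the /sys/bus/pci/devices/$pci/net/* pattern from nvmf/common.sh.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    pci_net_devs=("$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue    # no netdev bound to this function
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]##*/}"
done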
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:16.024 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:16.024 Found net devices under 0000:af:00.0: cvl_0_0 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:16.024 Found net devices under 0000:af:00.1: cvl_0_1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.024 
22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:34:16.024 00:34:16.024 --- 10.0.0.2 ping statistics --- 00:34:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.024 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:34:16.024 00:34:16.024 --- 10.0.0.1 ping statistics --- 00:34:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.024 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=501706 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 501706 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501706 ']' 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
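The interface setup traced above amounts to the following (a condensed sketch; the device names cvl_0_0/cvl_0_1, the namespace, and the 10.0.0.x addresses are the ones reported in this run):

# Namespace plumbing performed by nvmftestinit/nvmf_tcp_init, condensed
ip netns add cvl_0_0_ns_spdk                       # target gets its own net namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1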
00:34:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.024 22:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.024 [2024-12-16 22:40:04.913779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:16.024 [2024-12-16 22:40:04.913822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.024 [2024-12-16 22:40:04.988696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.024 [2024-12-16 22:40:05.009684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.024 [2024-12-16 22:40:05.009717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.024 [2024-12-16 22:40:05.009724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.024 [2024-12-16 22:40:05.009730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.024 [2024-12-16 22:40:05.009735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.024 [2024-12-16 22:40:05.010239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.024 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.024 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.025 [2024-12-16 22:40:05.152666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.025 [2024-12-16 22:40:05.160822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:16.025 null0 00:34:16.025 [2024-12-16 22:40:05.192835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=501861 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 501861 /tmp/host.sock 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501861 ']' 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:16.025 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.025 [2024-12-16 22:40:05.258670] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:16.025 [2024-12-16 22:40:05.258710] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501861 ] 00:34:16.025 [2024-12-16 22:40:05.331233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.025 [2024-12-16 22:40:05.354207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.025 22:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.961 [2024-12-16 22:40:06.509059] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:16.961 [2024-12-16 22:40:06.509080] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:16.961 [2024-12-16 22:40:06.509094] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:16.961 [2024-12-16 22:40:06.595360] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:17.219 [2024-12-16 22:40:06.811332] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:17.219 [2024-12-16 22:40:06.812103] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14c4b50:1 started. 00:34:17.219 [2024-12-16 22:40:06.813420] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:17.219 [2024-12-16 22:40:06.813459] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:17.219 [2024-12-16 22:40:06.813478] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:17.219 [2024-12-16 22:40:06.813490] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:17.219 [2024-12-16 22:40:06.813507] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.220 [2024-12-16 22:40:06.818231] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14c4b50 was disconnected and freed. delete nvme_qpair. 
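The discovery bring-up just traced corresponds to roughly this command sequence; rpc_cmd in the trace is autotest shorthand for scripts/rpc.py against the given socket, and the backgrounding and relative paths in this sketch are illustrative:

# Host-side bring-up, condensed from the xtrace above
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1     # flags exactly as issued above
./scripts/rpc.py -s /tmp/host.sock framework_start_init           # leave the --wait-for-rpc state
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach                # block until nvme0 is attached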
00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:17.220 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.478 22:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.414 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.415 22:40:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:18.415 22:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.790 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.791 22:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:20.727 22:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.664 22:40:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:21.664 22:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.601 [2024-12-16 22:40:12.254954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:22.601 [2024-12-16 22:40:12.254993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.601 [2024-12-16 22:40:12.255004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.601 [2024-12-16 22:40:12.255013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.601 [2024-12-16 22:40:12.255020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.601 [2024-12-16 22:40:12.255027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.601 [2024-12-16 22:40:12.255034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.601 [2024-12-16 22:40:12.255040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.601 [2024-12-16 22:40:12.255047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.601 [2024-12-16 22:40:12.255054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.601 [2024-12-16 22:40:12.255061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.601 [2024-12-16 22:40:12.255068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1290 is same with the state(6) to be set 00:34:22.601 22:40:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:22.601 22:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:22.601 [2024-12-16 22:40:12.264975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1290 (9): Bad file descriptor 00:34:22.601 [2024-12-16 22:40:12.275012] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.601 [2024-12-16 22:40:12.275024] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.601 [2024-12-16 22:40:12.275030] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.601 [2024-12-16 22:40:12.275034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.601 [2024-12-16 22:40:12.275053] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.979 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.979 [2024-12-16 22:40:13.314290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:23.979 [2024-12-16 22:40:13.314369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1290 with addr=10.0.0.2, port=4420 00:34:23.979 [2024-12-16 22:40:13.314401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1290 is same with the state(6) to be set 00:34:23.979 [2024-12-16 22:40:13.314451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1290 (9): Bad file descriptor 00:34:23.979 [2024-12-16 22:40:13.315394] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:23.979 [2024-12-16 22:40:13.315457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:23.980 [2024-12-16 22:40:13.315480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:23.980 [2024-12-16 22:40:13.315503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:23.980 [2024-12-16 22:40:13.315524] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:23.980 [2024-12-16 22:40:13.315540] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:23.980 [2024-12-16 22:40:13.315554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:23.980 [2024-12-16 22:40:13.315575] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:23.980 [2024-12-16 22:40:13.315590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:23.980 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.980 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:23.980 22:40:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:24.917 [2024-12-16 22:40:14.318098] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:24.917 [2024-12-16 22:40:14.318116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:24.917 [2024-12-16 22:40:14.318126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:24.917 [2024-12-16 22:40:14.318133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:24.917 [2024-12-16 22:40:14.318140] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:24.917 [2024-12-16 22:40:14.318146] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:24.917 [2024-12-16 22:40:14.318150] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:24.918 [2024-12-16 22:40:14.318154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
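The repeated bdev_get_bdevs / jq / sort / xargs / sleep cycles above are a polling loop, reconstructed here from the xtrace; the real helpers live in host/discovery_remove_ifc.sh and may differ in detail:

get_bdev_list() {
    # names of all bdevs on the host target, sorted, on one line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # sample once a second until the list matches the expected value
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # after attach: the discovered namespace must appear
wait_for_bdev ''        # after the interface is pulled: the list must drain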
00:34:24.918 [2024-12-16 22:40:14.318177] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:24.918 [2024-12-16 22:40:14.318201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.918 [2024-12-16 22:40:14.318210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.918 [2024-12-16 22:40:14.318218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.918 [2024-12-16 22:40:14.318225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.918 [2024-12-16 22:40:14.318232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.918 [2024-12-16 22:40:14.318238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.918 [2024-12-16 22:40:14.318245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.918 [2024-12-16 22:40:14.318252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.918 [2024-12-16 22:40:14.318259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:24.918 [2024-12-16 22:40:14.318265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:24.918 [2024-12-16 22:40:14.318272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
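Not part of the test script itself: while the reconnect attempts above are failing, the controller and path status can be watched from outside with the standard SPDK RPC bdev_nvme_get_controllers (shown only as an illustration against this run's host socket):

./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # dump controller/path status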
00:34:24.918 [2024-12-16 22:40:14.318687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14909e0 (9): Bad file descriptor 00:34:24.918 [2024-12-16 22:40:14.319700] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:24.918 [2024-12-16 22:40:14.319712] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:24.918 22:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.855 22:40:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:25.855 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.113 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:26.113 22:40:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:26.681 [2024-12-16 22:40:16.375652] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:26.681 [2024-12-16 22:40:16.375668] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:26.681 [2024-12-16 22:40:16.375680] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:26.940 [2024-12-16 22:40:16.461935] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:26.940 [2024-12-16 22:40:16.523381] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:26.940 [2024-12-16 22:40:16.523951] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x14a3540:1 started. 00:34:26.940 [2024-12-16 22:40:16.524954] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:26.940 [2024-12-16 22:40:16.524983] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:26.940 [2024-12-16 22:40:16.524999] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:26.940 [2024-12-16 22:40:16.525012] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:26.940 [2024-12-16 22:40:16.525019] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:26.940 [2024-12-16 22:40:16.533030] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x14a3540 was disconnected and freed. delete nvme_qpair. 
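The restore sequence traced above, condensed: the target-side interface is re-plumbed, rediscovery kicks in, and the namespace comes back as a fresh bdev (nvme1n1). The wait_for_bdev helper is the one sketched earlier.

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1   # the new attach comes up as nvme1, not nvme0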
00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 501861 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501861 ']' 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501861 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:26.940 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501861 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501861' 00:34:27.200 killing process with pid 501861 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501861 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501861 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.200 rmmod nvme_tcp 00:34:27.200 rmmod nvme_fabrics 00:34:27.200 rmmod nvme_keyring 00:34:27.200 22:40:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 501706 ']' 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 501706 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501706 ']' 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501706 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.200 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501706 00:34:27.460 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:27.460 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:27.460 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501706' 00:34:27.460 killing process with pid 501706 00:34:27.460 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501706 00:34:27.460 22:40:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501706 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.460 22:40:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.999 00:34:29.999 real 0m20.403s 00:34:29.999 user 0m24.681s 00:34:29.999 sys 0m5.745s 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:29.999 ************************************ 00:34:29.999 END TEST nvmf_discovery_remove_ifc 00:34:29.999 ************************************ 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.999 ************************************ 00:34:29.999 START TEST nvmf_identify_kernel_target 00:34:29.999 ************************************ 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:29.999 * Looking for test storage... 00:34:29.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:29.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.999 --rc genhtml_branch_coverage=1 00:34:29.999 --rc genhtml_function_coverage=1 00:34:29.999 --rc genhtml_legend=1 00:34:29.999 --rc geninfo_all_blocks=1 00:34:29.999 --rc geninfo_unexecuted_blocks=1 00:34:29.999 00:34:29.999 ' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:29.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.999 --rc genhtml_branch_coverage=1 00:34:29.999 --rc genhtml_function_coverage=1 00:34:29.999 --rc genhtml_legend=1 00:34:29.999 --rc geninfo_all_blocks=1 00:34:29.999 --rc geninfo_unexecuted_blocks=1 00:34:29.999 00:34:29.999 ' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:29.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.999 --rc genhtml_branch_coverage=1 00:34:29.999 --rc genhtml_function_coverage=1 00:34:29.999 --rc genhtml_legend=1 00:34:29.999 --rc geninfo_all_blocks=1 00:34:29.999 --rc geninfo_unexecuted_blocks=1 00:34:29.999 00:34:29.999 ' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:29.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.999 --rc genhtml_branch_coverage=1 00:34:29.999 --rc genhtml_function_coverage=1 00:34:29.999 --rc genhtml_legend=1 00:34:29.999 --rc geninfo_all_blocks=1 00:34:29.999 --rc geninfo_unexecuted_blocks=1 00:34:29.999 00:34:29.999 ' 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:29.999 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:34:30.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.000 22:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.277 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.278 22:40:24 
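The "[: : integer expression expected" complaint above is bash refusing an arithmetic test on an empty string: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']' because the variable behind it was never set in this environment. A minimal sketch of the failure mode and a defensive rewrite (the variable name is hypothetical; the trace does not show which variable was empty):

  opt=""                              # never set in this run
  [ "$opt" -eq 1 ] && echo on        # -> [: : integer expression expected
  [ "${opt:-0}" -eq 1 ] && echo on   # defaulting the expansion to 0 avoids the error
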
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.278 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.278 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:35.541 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:35.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.541 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:35.542 Found net devices under 0000:af:00.0: cvl_0_0 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:35.542 Found net devices under 0000:af:00.1: cvl_0_1 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
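The two "Found net devices under ..." messages above come from resolving each Intel E810 PCI function (vendor 0x8086, device 0x159b, bound to the ice driver) to its renamed kernel interface through sysfs. A rough equivalent of that scan, assuming the PCI addresses seen in this run:

  for pci in 0000:af:00.0 0000:af:00.1; do
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done
  done
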
-- # net_devs+=("${pci_net_devs[@]}") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.542 22:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.542 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:34:35.543 00:34:35.543 --- 10.0.0.2 ping statistics --- 00:34:35.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.543 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:34:35.543 00:34:35.543 --- 10.0.0.1 ping statistics --- 00:34:35.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.543 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.543 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.802 22:40:25 
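The nvmf_tcp_init steps above split the two ports across network namespaces so target and initiator traffic really crosses the wire: cvl_0_0 becomes the target interface (10.0.0.2) inside a fresh namespace, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and the two pings prove reachability in both directions. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator
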
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:35.802 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:35.803 22:40:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:38.339 Waiting for block devices as requested 00:34:38.339 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:38.599 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:38.599 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:38.599 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:38.858 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:38.858 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:38.858 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:39.117 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:39.117 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:39.117 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:39.376 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:39.376 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:39.376 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:39.376 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:39.635 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:39.635 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:39.635 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
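configure_kernel_target drives the in-kernel nvmet target through configfs rather than an SPDK app: after modprobe nvmet, setup.sh reset hands the NVMe disk back to the kernel driver; just below, the GPT probe ("No valid GPT data, bailing") picks /dev/nvme0n1 as unused backing storage, and a mkdir/echo/ln -s sequence builds the subsystem, namespace, and TCP port. The trace drops each echo's redirection target, so the standard nvmet attribute names in this sketch are an assumption reconstructed from the echoed values:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir -p $subsys/namespaces/1 $nvmet/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model
  echo 1            > $subsys/attr_allow_any_host
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1            > $subsys/namespaces/1/enable
  echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
  echo tcp          > $nvmet/ports/1/addr_trtype
  echo 4420         > $nvmet/ports/1/addr_trsvcid
  echo ipv4         > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/
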
00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:39.894 No valid GPT data, bailing 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:39.894 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:39.894 00:34:39.894 Discovery Log Number of Records 2, Generation counter 2 00:34:39.894 =====Discovery Log Entry 0====== 00:34:39.894 trtype: tcp 00:34:39.894 adrfam: ipv4 00:34:39.894 subtype: current discovery subsystem 00:34:39.894 treq: not specified, sq flow control disable supported 00:34:39.894 portid: 1 00:34:39.894 trsvcid: 4420 00:34:39.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:39.894 traddr: 10.0.0.1 00:34:39.894 eflags: none 00:34:39.895 sectype: none 00:34:39.895 =====Discovery Log Entry 1====== 00:34:39.895 trtype: tcp 00:34:39.895 adrfam: ipv4 00:34:39.895 subtype: nvme subsystem 00:34:39.895 treq: not specified, sq flow control disable 
supported 00:34:39.895 portid: 1 00:34:39.895 trsvcid: 4420 00:34:39.895 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:39.895 traddr: 10.0.0.1 00:34:39.895 eflags: none 00:34:39.895 sectype: none 00:34:39.895 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:39.895 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:40.155 ===================================================== 00:34:40.155 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:40.155 ===================================================== 00:34:40.155 Controller Capabilities/Features 00:34:40.155 ================================ 00:34:40.155 Vendor ID: 0000 00:34:40.155 Subsystem Vendor ID: 0000 00:34:40.155 Serial Number: 926699273c25bf19bdbb 00:34:40.155 Model Number: Linux 00:34:40.155 Firmware Version: 6.8.9-20 00:34:40.155 Recommended Arb Burst: 0 00:34:40.155 IEEE OUI Identifier: 00 00 00 00:34:40.155 Multi-path I/O 00:34:40.155 May have multiple subsystem ports: No 00:34:40.155 May have multiple controllers: No 00:34:40.155 Associated with SR-IOV VF: No 00:34:40.155 Max Data Transfer Size: Unlimited 00:34:40.155 Max Number of Namespaces: 0 00:34:40.155 Max Number of I/O Queues: 1024 00:34:40.155 NVMe Specification Version (VS): 1.3 00:34:40.155 NVMe Specification Version (Identify): 1.3 00:34:40.155 Maximum Queue Entries: 1024 00:34:40.155 Contiguous Queues Required: No 00:34:40.155 Arbitration Mechanisms Supported 00:34:40.155 Weighted Round Robin: Not Supported 00:34:40.155 Vendor Specific: Not Supported 00:34:40.155 Reset Timeout: 7500 ms 00:34:40.155 Doorbell Stride: 4 bytes 00:34:40.155 NVM Subsystem Reset: Not Supported 00:34:40.155 Command Sets Supported 00:34:40.155 NVM Command Set: Supported 00:34:40.155 Boot Partition: Not Supported 00:34:40.155 Memory Page Size Minimum: 4096 bytes 00:34:40.155 Memory Page Size Maximum: 4096 bytes 00:34:40.155 Persistent Memory Region: Not Supported 00:34:40.155 Optional Asynchronous Events Supported 00:34:40.155 Namespace Attribute Notices: Not Supported 00:34:40.155 Firmware Activation Notices: Not Supported 00:34:40.155 ANA Change Notices: Not Supported 00:34:40.155 PLE Aggregate Log Change Notices: Not Supported 00:34:40.155 LBA Status Info Alert Notices: Not Supported 00:34:40.155 EGE Aggregate Log Change Notices: Not Supported 00:34:40.155 Normal NVM Subsystem Shutdown event: Not Supported 00:34:40.155 Zone Descriptor Change Notices: Not Supported 00:34:40.155 Discovery Log Change Notices: Supported 00:34:40.155 Controller Attributes 00:34:40.155 128-bit Host Identifier: Not Supported 00:34:40.155 Non-Operational Permissive Mode: Not Supported 00:34:40.155 NVM Sets: Not Supported 00:34:40.155 Read Recovery Levels: Not Supported 00:34:40.155 Endurance Groups: Not Supported 00:34:40.155 Predictable Latency Mode: Not Supported 00:34:40.155 Traffic Based Keep ALive: Not Supported 00:34:40.155 Namespace Granularity: Not Supported 00:34:40.155 SQ Associations: Not Supported 00:34:40.155 UUID List: Not Supported 00:34:40.155 Multi-Domain Subsystem: Not Supported 00:34:40.155 Fixed Capacity Management: Not Supported 00:34:40.155 Variable Capacity Management: Not Supported 00:34:40.155 Delete Endurance Group: Not Supported 00:34:40.155 Delete NVM Set: Not Supported 00:34:40.155 Extended LBA Formats Supported: Not Supported 00:34:40.155 Flexible Data Placement 
Supported: Not Supported 00:34:40.155 00:34:40.155 Controller Memory Buffer Support 00:34:40.155 ================================ 00:34:40.155 Supported: No 00:34:40.155 00:34:40.155 Persistent Memory Region Support 00:34:40.155 ================================ 00:34:40.155 Supported: No 00:34:40.155 00:34:40.155 Admin Command Set Attributes 00:34:40.155 ============================ 00:34:40.155 Security Send/Receive: Not Supported 00:34:40.155 Format NVM: Not Supported 00:34:40.155 Firmware Activate/Download: Not Supported 00:34:40.155 Namespace Management: Not Supported 00:34:40.155 Device Self-Test: Not Supported 00:34:40.155 Directives: Not Supported 00:34:40.155 NVMe-MI: Not Supported 00:34:40.155 Virtualization Management: Not Supported 00:34:40.155 Doorbell Buffer Config: Not Supported 00:34:40.155 Get LBA Status Capability: Not Supported 00:34:40.155 Command & Feature Lockdown Capability: Not Supported 00:34:40.155 Abort Command Limit: 1 00:34:40.155 Async Event Request Limit: 1 00:34:40.155 Number of Firmware Slots: N/A 00:34:40.155 Firmware Slot 1 Read-Only: N/A 00:34:40.155 Firmware Activation Without Reset: N/A 00:34:40.155 Multiple Update Detection Support: N/A 00:34:40.155 Firmware Update Granularity: No Information Provided 00:34:40.155 Per-Namespace SMART Log: No 00:34:40.155 Asymmetric Namespace Access Log Page: Not Supported 00:34:40.155 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:40.155 Command Effects Log Page: Not Supported 00:34:40.155 Get Log Page Extended Data: Supported 00:34:40.155 Telemetry Log Pages: Not Supported 00:34:40.155 Persistent Event Log Pages: Not Supported 00:34:40.155 Supported Log Pages Log Page: May Support 00:34:40.155 Commands Supported & Effects Log Page: Not Supported 00:34:40.155 Feature Identifiers & Effects Log Page:May Support 00:34:40.155 NVMe-MI Commands & Effects Log Page: May Support 00:34:40.155 Data Area 4 for Telemetry Log: Not Supported 00:34:40.155 Error Log Page Entries Supported: 1 00:34:40.155 Keep Alive: Not Supported 00:34:40.155 00:34:40.155 NVM Command Set Attributes 00:34:40.155 ========================== 00:34:40.155 Submission Queue Entry Size 00:34:40.155 Max: 1 00:34:40.155 Min: 1 00:34:40.155 Completion Queue Entry Size 00:34:40.156 Max: 1 00:34:40.156 Min: 1 00:34:40.156 Number of Namespaces: 0 00:34:40.156 Compare Command: Not Supported 00:34:40.156 Write Uncorrectable Command: Not Supported 00:34:40.156 Dataset Management Command: Not Supported 00:34:40.156 Write Zeroes Command: Not Supported 00:34:40.156 Set Features Save Field: Not Supported 00:34:40.156 Reservations: Not Supported 00:34:40.156 Timestamp: Not Supported 00:34:40.156 Copy: Not Supported 00:34:40.156 Volatile Write Cache: Not Present 00:34:40.156 Atomic Write Unit (Normal): 1 00:34:40.156 Atomic Write Unit (PFail): 1 00:34:40.156 Atomic Compare & Write Unit: 1 00:34:40.156 Fused Compare & Write: Not Supported 00:34:40.156 Scatter-Gather List 00:34:40.156 SGL Command Set: Supported 00:34:40.156 SGL Keyed: Not Supported 00:34:40.156 SGL Bit Bucket Descriptor: Not Supported 00:34:40.156 SGL Metadata Pointer: Not Supported 00:34:40.156 Oversized SGL: Not Supported 00:34:40.156 SGL Metadata Address: Not Supported 00:34:40.156 SGL Offset: Supported 00:34:40.156 Transport SGL Data Block: Not Supported 00:34:40.156 Replay Protected Memory Block: Not Supported 00:34:40.156 00:34:40.156 Firmware Slot Information 00:34:40.156 ========================= 00:34:40.156 Active slot: 0 00:34:40.156 00:34:40.156 00:34:40.156 Error Log 00:34:40.156 
========= 00:34:40.156 00:34:40.156 Active Namespaces 00:34:40.156 ================= 00:34:40.156 Discovery Log Page 00:34:40.156 ================== 00:34:40.156 Generation Counter: 2 00:34:40.156 Number of Records: 2 00:34:40.156 Record Format: 0 00:34:40.156 00:34:40.156 Discovery Log Entry 0 00:34:40.156 ---------------------- 00:34:40.156 Transport Type: 3 (TCP) 00:34:40.156 Address Family: 1 (IPv4) 00:34:40.156 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:40.156 Entry Flags: 00:34:40.156 Duplicate Returned Information: 0 00:34:40.156 Explicit Persistent Connection Support for Discovery: 0 00:34:40.156 Transport Requirements: 00:34:40.156 Secure Channel: Not Specified 00:34:40.156 Port ID: 1 (0x0001) 00:34:40.156 Controller ID: 65535 (0xffff) 00:34:40.156 Admin Max SQ Size: 32 00:34:40.156 Transport Service Identifier: 4420 00:34:40.156 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:40.156 Transport Address: 10.0.0.1 00:34:40.156 Discovery Log Entry 1 00:34:40.156 ---------------------- 00:34:40.156 Transport Type: 3 (TCP) 00:34:40.156 Address Family: 1 (IPv4) 00:34:40.156 Subsystem Type: 2 (NVM Subsystem) 00:34:40.156 Entry Flags: 00:34:40.156 Duplicate Returned Information: 0 00:34:40.156 Explicit Persistent Connection Support for Discovery: 0 00:34:40.156 Transport Requirements: 00:34:40.156 Secure Channel: Not Specified 00:34:40.156 Port ID: 1 (0x0001) 00:34:40.156 Controller ID: 65535 (0xffff) 00:34:40.156 Admin Max SQ Size: 32 00:34:40.156 Transport Service Identifier: 4420 00:34:40.156 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:40.156 Transport Address: 10.0.0.1 00:34:40.156 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.156 get_feature(0x01) failed 00:34:40.156 get_feature(0x02) failed 00:34:40.156 get_feature(0x04) failed 00:34:40.156 ===================================================== 00:34:40.156 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:40.156 ===================================================== 00:34:40.156 Controller Capabilities/Features 00:34:40.156 ================================ 00:34:40.156 Vendor ID: 0000 00:34:40.156 Subsystem Vendor ID: 0000 00:34:40.156 Serial Number: f712202c93bd076e930c 00:34:40.156 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:40.156 Firmware Version: 6.8.9-20 00:34:40.156 Recommended Arb Burst: 6 00:34:40.156 IEEE OUI Identifier: 00 00 00 00:34:40.156 Multi-path I/O 00:34:40.156 May have multiple subsystem ports: Yes 00:34:40.156 May have multiple controllers: Yes 00:34:40.156 Associated with SR-IOV VF: No 00:34:40.156 Max Data Transfer Size: Unlimited 00:34:40.156 Max Number of Namespaces: 1024 00:34:40.156 Max Number of I/O Queues: 128 00:34:40.156 NVMe Specification Version (VS): 1.3 00:34:40.156 NVMe Specification Version (Identify): 1.3 00:34:40.156 Maximum Queue Entries: 1024 00:34:40.156 Contiguous Queues Required: No 00:34:40.156 Arbitration Mechanisms Supported 00:34:40.156 Weighted Round Robin: Not Supported 00:34:40.156 Vendor Specific: Not Supported 00:34:40.156 Reset Timeout: 7500 ms 00:34:40.156 Doorbell Stride: 4 bytes 00:34:40.156 NVM Subsystem Reset: Not Supported 00:34:40.156 Command Sets Supported 00:34:40.156 NVM Command Set: Supported 00:34:40.156 Boot Partition: Not Supported 00:34:40.156 
Memory Page Size Minimum: 4096 bytes 00:34:40.156 Memory Page Size Maximum: 4096 bytes 00:34:40.156 Persistent Memory Region: Not Supported 00:34:40.156 Optional Asynchronous Events Supported 00:34:40.156 Namespace Attribute Notices: Supported 00:34:40.156 Firmware Activation Notices: Not Supported 00:34:40.156 ANA Change Notices: Supported 00:34:40.156 PLE Aggregate Log Change Notices: Not Supported 00:34:40.156 LBA Status Info Alert Notices: Not Supported 00:34:40.156 EGE Aggregate Log Change Notices: Not Supported 00:34:40.156 Normal NVM Subsystem Shutdown event: Not Supported 00:34:40.156 Zone Descriptor Change Notices: Not Supported 00:34:40.156 Discovery Log Change Notices: Not Supported 00:34:40.156 Controller Attributes 00:34:40.156 128-bit Host Identifier: Supported 00:34:40.156 Non-Operational Permissive Mode: Not Supported 00:34:40.156 NVM Sets: Not Supported 00:34:40.156 Read Recovery Levels: Not Supported 00:34:40.156 Endurance Groups: Not Supported 00:34:40.156 Predictable Latency Mode: Not Supported 00:34:40.156 Traffic Based Keep ALive: Supported 00:34:40.156 Namespace Granularity: Not Supported 00:34:40.156 SQ Associations: Not Supported 00:34:40.156 UUID List: Not Supported 00:34:40.156 Multi-Domain Subsystem: Not Supported 00:34:40.156 Fixed Capacity Management: Not Supported 00:34:40.156 Variable Capacity Management: Not Supported 00:34:40.156 Delete Endurance Group: Not Supported 00:34:40.156 Delete NVM Set: Not Supported 00:34:40.156 Extended LBA Formats Supported: Not Supported 00:34:40.156 Flexible Data Placement Supported: Not Supported 00:34:40.156 00:34:40.156 Controller Memory Buffer Support 00:34:40.156 ================================ 00:34:40.156 Supported: No 00:34:40.156 00:34:40.156 Persistent Memory Region Support 00:34:40.156 ================================ 00:34:40.156 Supported: No 00:34:40.156 00:34:40.156 Admin Command Set Attributes 00:34:40.156 ============================ 00:34:40.156 Security Send/Receive: Not Supported 00:34:40.156 Format NVM: Not Supported 00:34:40.156 Firmware Activate/Download: Not Supported 00:34:40.156 Namespace Management: Not Supported 00:34:40.156 Device Self-Test: Not Supported 00:34:40.156 Directives: Not Supported 00:34:40.156 NVMe-MI: Not Supported 00:34:40.156 Virtualization Management: Not Supported 00:34:40.156 Doorbell Buffer Config: Not Supported 00:34:40.156 Get LBA Status Capability: Not Supported 00:34:40.156 Command & Feature Lockdown Capability: Not Supported 00:34:40.156 Abort Command Limit: 4 00:34:40.156 Async Event Request Limit: 4 00:34:40.156 Number of Firmware Slots: N/A 00:34:40.156 Firmware Slot 1 Read-Only: N/A 00:34:40.156 Firmware Activation Without Reset: N/A 00:34:40.156 Multiple Update Detection Support: N/A 00:34:40.157 Firmware Update Granularity: No Information Provided 00:34:40.157 Per-Namespace SMART Log: Yes 00:34:40.157 Asymmetric Namespace Access Log Page: Supported 00:34:40.157 ANA Transition Time : 10 sec 00:34:40.157 00:34:40.157 Asymmetric Namespace Access Capabilities 00:34:40.157 ANA Optimized State : Supported 00:34:40.157 ANA Non-Optimized State : Supported 00:34:40.157 ANA Inaccessible State : Supported 00:34:40.157 ANA Persistent Loss State : Supported 00:34:40.157 ANA Change State : Supported 00:34:40.157 ANAGRPID is not changed : No 00:34:40.157 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:40.157 00:34:40.157 ANA Group Identifier Maximum : 128 00:34:40.157 Number of ANA Group Identifiers : 128 00:34:40.157 Max Number of Allowed Namespaces : 1024 00:34:40.157 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:40.157 Command Effects Log Page: Supported 00:34:40.157 Get Log Page Extended Data: Supported 00:34:40.157 Telemetry Log Pages: Not Supported 00:34:40.157 Persistent Event Log Pages: Not Supported 00:34:40.157 Supported Log Pages Log Page: May Support 00:34:40.157 Commands Supported & Effects Log Page: Not Supported 00:34:40.157 Feature Identifiers & Effects Log Page:May Support 00:34:40.157 NVMe-MI Commands & Effects Log Page: May Support 00:34:40.157 Data Area 4 for Telemetry Log: Not Supported 00:34:40.157 Error Log Page Entries Supported: 128 00:34:40.157 Keep Alive: Supported 00:34:40.157 Keep Alive Granularity: 1000 ms 00:34:40.157 00:34:40.157 NVM Command Set Attributes 00:34:40.157 ========================== 00:34:40.157 Submission Queue Entry Size 00:34:40.157 Max: 64 00:34:40.157 Min: 64 00:34:40.157 Completion Queue Entry Size 00:34:40.157 Max: 16 00:34:40.157 Min: 16 00:34:40.157 Number of Namespaces: 1024 00:34:40.157 Compare Command: Not Supported 00:34:40.157 Write Uncorrectable Command: Not Supported 00:34:40.157 Dataset Management Command: Supported 00:34:40.157 Write Zeroes Command: Supported 00:34:40.157 Set Features Save Field: Not Supported 00:34:40.157 Reservations: Not Supported 00:34:40.157 Timestamp: Not Supported 00:34:40.157 Copy: Not Supported 00:34:40.157 Volatile Write Cache: Present 00:34:40.157 Atomic Write Unit (Normal): 1 00:34:40.157 Atomic Write Unit (PFail): 1 00:34:40.157 Atomic Compare & Write Unit: 1 00:34:40.157 Fused Compare & Write: Not Supported 00:34:40.157 Scatter-Gather List 00:34:40.157 SGL Command Set: Supported 00:34:40.157 SGL Keyed: Not Supported 00:34:40.157 SGL Bit Bucket Descriptor: Not Supported 00:34:40.157 SGL Metadata Pointer: Not Supported 00:34:40.157 Oversized SGL: Not Supported 00:34:40.157 SGL Metadata Address: Not Supported 00:34:40.157 SGL Offset: Supported 00:34:40.157 Transport SGL Data Block: Not Supported 00:34:40.157 Replay Protected Memory Block: Not Supported 00:34:40.157 00:34:40.157 Firmware Slot Information 00:34:40.157 ========================= 00:34:40.157 Active slot: 0 00:34:40.157 00:34:40.157 Asymmetric Namespace Access 00:34:40.157 =========================== 00:34:40.157 Change Count : 0 00:34:40.157 Number of ANA Group Descriptors : 1 00:34:40.157 ANA Group Descriptor : 0 00:34:40.157 ANA Group ID : 1 00:34:40.157 Number of NSID Values : 1 00:34:40.157 Change Count : 0 00:34:40.157 ANA State : 1 00:34:40.157 Namespace Identifier : 1 00:34:40.157 00:34:40.157 Commands Supported and Effects 00:34:40.157 ============================== 00:34:40.157 Admin Commands 00:34:40.157 -------------- 00:34:40.157 Get Log Page (02h): Supported 00:34:40.157 Identify (06h): Supported 00:34:40.157 Abort (08h): Supported 00:34:40.157 Set Features (09h): Supported 00:34:40.157 Get Features (0Ah): Supported 00:34:40.157 Asynchronous Event Request (0Ch): Supported 00:34:40.157 Keep Alive (18h): Supported 00:34:40.157 I/O Commands 00:34:40.157 ------------ 00:34:40.157 Flush (00h): Supported 00:34:40.157 Write (01h): Supported LBA-Change 00:34:40.157 Read (02h): Supported 00:34:40.157 Write Zeroes (08h): Supported LBA-Change 00:34:40.157 Dataset Management (09h): Supported 00:34:40.157 00:34:40.157 Error Log 00:34:40.157 ========= 00:34:40.157 Entry: 0 00:34:40.157 Error Count: 0x3 00:34:40.157 Submission Queue Id: 0x0 00:34:40.157 Command Id: 0x5 00:34:40.157 Phase Bit: 0 00:34:40.157 Status Code: 0x2 00:34:40.157 Status Code Type: 0x0 00:34:40.157 Do Not Retry: 1 00:34:40.157 
Error Location: 0x28 00:34:40.157 LBA: 0x0 00:34:40.157 Namespace: 0x0 00:34:40.157 Vendor Log Page: 0x0 00:34:40.157 ----------- 00:34:40.157 Entry: 1 00:34:40.157 Error Count: 0x2 00:34:40.157 Submission Queue Id: 0x0 00:34:40.157 Command Id: 0x5 00:34:40.157 Phase Bit: 0 00:34:40.157 Status Code: 0x2 00:34:40.157 Status Code Type: 0x0 00:34:40.157 Do Not Retry: 1 00:34:40.157 Error Location: 0x28 00:34:40.157 LBA: 0x0 00:34:40.157 Namespace: 0x0 00:34:40.157 Vendor Log Page: 0x0 00:34:40.157 ----------- 00:34:40.157 Entry: 2 00:34:40.157 Error Count: 0x1 00:34:40.157 Submission Queue Id: 0x0 00:34:40.157 Command Id: 0x4 00:34:40.157 Phase Bit: 0 00:34:40.157 Status Code: 0x2 00:34:40.157 Status Code Type: 0x0 00:34:40.157 Do Not Retry: 1 00:34:40.157 Error Location: 0x28 00:34:40.157 LBA: 0x0 00:34:40.157 Namespace: 0x0 00:34:40.157 Vendor Log Page: 0x0 00:34:40.157 00:34:40.157 Number of Queues 00:34:40.157 ================ 00:34:40.157 Number of I/O Submission Queues: 128 00:34:40.157 Number of I/O Completion Queues: 128 00:34:40.157 00:34:40.157 ZNS Specific Controller Data 00:34:40.157 ============================ 00:34:40.157 Zone Append Size Limit: 0 00:34:40.157 00:34:40.157 00:34:40.157 Active Namespaces 00:34:40.157 ================= 00:34:40.157 get_feature(0x05) failed 00:34:40.157 Namespace ID:1 00:34:40.157 Command Set Identifier: NVM (00h) 00:34:40.157 Deallocate: Supported 00:34:40.157 Deallocated/Unwritten Error: Not Supported 00:34:40.157 Deallocated Read Value: Unknown 00:34:40.157 Deallocate in Write Zeroes: Not Supported 00:34:40.157 Deallocated Guard Field: 0xFFFF 00:34:40.157 Flush: Supported 00:34:40.157 Reservation: Not Supported 00:34:40.157 Namespace Sharing Capabilities: Multiple Controllers 00:34:40.157 Size (in LBAs): 1953525168 (931GiB) 00:34:40.157 Capacity (in LBAs): 1953525168 (931GiB) 00:34:40.157 Utilization (in LBAs): 1953525168 (931GiB) 00:34:40.157 UUID: a581cb7e-206f-429b-a53a-079e2971ad68 00:34:40.157 Thin Provisioning: Not Supported 00:34:40.157 Per-NS Atomic Units: Yes 00:34:40.157 Atomic Boundary Size (Normal): 0 00:34:40.157 Atomic Boundary Size (PFail): 0 00:34:40.157 Atomic Boundary Offset: 0 00:34:40.157 NGUID/EUI64 Never Reused: No 00:34:40.157 ANA group ID: 1 00:34:40.157 Namespace Write Protected: No 00:34:40.157 Number of LBA Formats: 1 00:34:40.157 Current LBA Format: LBA Format #00 00:34:40.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:40.157 00:34:40.157 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:40.157 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:40.157 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:40.157 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.158 rmmod nvme_tcp 00:34:40.158 rmmod nvme_fabrics 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:40.158 22:40:29 
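The get_feature(0x01), (0x02), (0x04), and (0x05) failures sprinkled through the identify run above are expected: those feature IDs are the optional Arbitration, Power Management, Temperature Threshold, and Error Recovery features, which the Linux nvmet target leaves unimplemented, so spdk_nvme_identify notes the failed probe and falls back to defaults. The probe can be reproduced with nvme-cli once connected (the /dev/nvme1 name is an assumption about enumeration order on this host):

  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
  nvme get-feature /dev/nvme1 --feature-id=0x01   # likely rejected by nvmet as Invalid Field
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn
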
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.158 22:40:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:42.697 22:40:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:45.234 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:45.234 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:46.172 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:46.172 00:34:46.172 real 0m16.491s 00:34:46.172 user 0m4.279s 00:34:46.172 sys 0m8.632s 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:46.172 ************************************ 00:34:46.172 END TEST nvmf_identify_kernel_target 00:34:46.172 ************************************ 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.172 ************************************ 00:34:46.172 START TEST nvmf_auth_host 00:34:46.172 ************************************ 00:34:46.172 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:46.432 * Looking for test storage... 
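The identify_kernel_target teardown above (nvmftestfini plus clean_kernel_target) undoes the setup in reverse: the tagged iptables rule is filtered out of a saved ruleset, the namespace and its addresses are removed, the configfs nodes are deleted child-before-parent, the nvmet modules are unloaded, and setup.sh rebinds the devices to vfio-pci for the next test. The essential sequence, with the echo 0 target again inferred from the standard nvmet layout:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
  ip -4 addr flush cvl_0_1
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > $subsys/namespaces/1/enable
  rm -f $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir $subsys/namespaces/1 $nvmet/ports/1 $subsys
  modprobe -r nvmet_tcp nvmet
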
00:34:46.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.432 --rc genhtml_branch_coverage=1 00:34:46.432 --rc genhtml_function_coverage=1 00:34:46.432 --rc genhtml_legend=1 00:34:46.432 --rc geninfo_all_blocks=1 00:34:46.432 --rc geninfo_unexecuted_blocks=1 00:34:46.432 00:34:46.432 ' 00:34:46.432 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.432 --rc genhtml_branch_coverage=1 00:34:46.432 --rc genhtml_function_coverage=1 00:34:46.432 --rc genhtml_legend=1 00:34:46.432 --rc geninfo_all_blocks=1 00:34:46.432 --rc geninfo_unexecuted_blocks=1 00:34:46.432 00:34:46.432 ' 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.433 --rc genhtml_branch_coverage=1 00:34:46.433 --rc genhtml_function_coverage=1 00:34:46.433 --rc genhtml_legend=1 00:34:46.433 --rc geninfo_all_blocks=1 00:34:46.433 --rc geninfo_unexecuted_blocks=1 00:34:46.433 00:34:46.433 ' 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:46.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:46.433 --rc genhtml_branch_coverage=1 00:34:46.433 --rc genhtml_function_coverage=1 00:34:46.433 --rc genhtml_legend=1 00:34:46.433 --rc geninfo_all_blocks=1 00:34:46.433 --rc geninfo_unexecuted_blocks=1 00:34:46.433 00:34:46.433 ' 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.433 22:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:46.433 22:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:46.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:46.433 22:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.005 22:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:53.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:53.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.005 
22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:53.005 Found net devices under 0000:af:00.0: cvl_0_0 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:53.005 Found net devices under 0000:af:00.1: cvl_0_1 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.005 22:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.005 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:34:53.006 00:34:53.006 --- 10.0.0.2 ping statistics --- 00:34:53.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.006 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:34:53.006 00:34:53.006 --- 10.0.0.1 ping statistics --- 00:34:53.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.006 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=513483 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 513483 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513483 ']' 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
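[Editor's note: the nvmf_tcp_init sequence traced above reduces to a short iproute2/iptables recipe. A minimal standalone sketch follows; the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, the 10.0.0.0/24 addressing, and port 4420 are taken directly from this run, while the iptables comment tagging that ipts() adds is omitted for brevity.]

# Move the target-side port of the NIC pair into its own namespace so the
# target and initiator can exchange real TCP traffic on one host:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then verify reachability both ways,
# exactly as the ping output above shows:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

[With this in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the app start below is prefixed with NVMF_TARGET_NS_CMD.]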
00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.006 22:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f823a131c1f1a08526760fa38da763d4 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ULe 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f823a131c1f1a08526760fa38da763d4 0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f823a131c1f1a08526760fa38da763d4 0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f823a131c1f1a08526760fa38da763d4 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ULe 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ULe 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ULe 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.006 22:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22f7d44a9fb346d380061c3ebcdf2ba768677710cb8624e2af16736b88194a88 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cnx 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22f7d44a9fb346d380061c3ebcdf2ba768677710cb8624e2af16736b88194a88 3 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22f7d44a9fb346d380061c3ebcdf2ba768677710cb8624e2af16736b88194a88 3 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=22f7d44a9fb346d380061c3ebcdf2ba768677710cb8624e2af16736b88194a88 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cnx 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cnx 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cnx 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ebf5d3b7f7fdadb83c0018af2909df69eab2cfbe85cf085 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7Pk 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ebf5d3b7f7fdadb83c0018af2909df69eab2cfbe85cf085 0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ebf5d3b7f7fdadb83c0018af2909df69eab2cfbe85cf085 0 
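[Editor's note: xtrace hides the body of the "python -" step in gen_dhchap_key, but the secrets it emits appear later in this log (e.g. DHHC-1:00:MmViZjVk...fXfMiA==: for key 1, which decodes back to the 48-char hex string generated above plus a 4-byte trailer). A sketch of the equivalent transformation, assuming the standard NVMe DH-HMAC-CHAP secret representation of base64(key bytes || little-endian CRC32 of the key); the digest suffix 0/1/2/3 maps to null/sha256/sha384/sha512 per the digests array traced above:]

key=$(xxd -p -c0 -l 24 /dev/urandom)       # 48 hex chars, as in the trace
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
# DHHC-1:<digest>:<base64(key || crc32(key) as 4 LE bytes)>:
blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"DHHC-1:00:{blob}:")
EOF
chmod 0600 "$file"                          # matches the chmod 0600 in the log

[The chmod matters: both the kernel nvmet host and the SPDK keyring refuse or warn on group/world-readable secret files, so the tests create every /tmp/spdk.key-* file with mode 0600 before registering it.]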
00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ebf5d3b7f7fdadb83c0018af2909df69eab2cfbe85cf085 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7Pk 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7Pk 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7Pk 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ddd2d400db45bb2042521a0a2ec9cea7798a76529531f508 00:34:53.006 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zO7 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ddd2d400db45bb2042521a0a2ec9cea7798a76529531f508 2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ddd2d400db45bb2042521a0a2ec9cea7798a76529531f508 2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ddd2d400db45bb2042521a0a2ec9cea7798a76529531f508 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zO7 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zO7 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zO7 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.007 22:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6aba28332c569b33afc0f41f128eae1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xls 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6aba28332c569b33afc0f41f128eae1 1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6aba28332c569b33afc0f41f128eae1 1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6aba28332c569b33afc0f41f128eae1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xls 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xls 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xls 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1da6e8f620caa42c458141fc9a963a46 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.W5Q 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1da6e8f620caa42c458141fc9a963a46 1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1da6e8f620caa42c458141fc9a963a46 1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=1da6e8f620caa42c458141fc9a963a46 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.W5Q 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.W5Q 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.W5Q 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4c2e7a8f582d115833f9ab66d340ae904d6a8ca4e38971c 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.p7v 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4c2e7a8f582d115833f9ab66d340ae904d6a8ca4e38971c 2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4c2e7a8f582d115833f9ab66d340ae904d6a8ca4e38971c 2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4c2e7a8f582d115833f9ab66d340ae904d6a8ca4e38971c 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.p7v 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.p7v 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.p7v 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:53.007 22:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f5fca859fb66bdda0229723d22d486ce 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Lmn 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f5fca859fb66bdda0229723d22d486ce 0 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f5fca859fb66bdda0229723d22d486ce 0 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f5fca859fb66bdda0229723d22d486ce 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:53.007 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Lmn 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Lmn 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Lmn 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f40320d5ed4fddc591ef7e5410b879634a1f6cd605fea2d62bc400df0a94f65e 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nIV 00:34:53.266 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f40320d5ed4fddc591ef7e5410b879634a1f6cd605fea2d62bc400df0a94f65e 3 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f40320d5ed4fddc591ef7e5410b879634a1f6cd605fea2d62bc400df0a94f65e 3 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f40320d5ed4fddc591ef7e5410b879634a1f6cd605fea2d62bc400df0a94f65e 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nIV 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nIV 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nIV 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 513483 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513483 ']' 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.267 22:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ULe 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cnx ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cnx 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7Pk 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zO7 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zO7 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xls 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.W5Q ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.W5Q 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.p7v 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Lmn ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Lmn 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nIV 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.526 22:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:53.526 22:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:56.060 Waiting for block devices as requested 00:34:56.318 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:56.318 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:56.318 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:56.576 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:56.576 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:56.576 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:56.576 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:56.834 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:56.835 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:56.835 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:57.093 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:57.093 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:57.093 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:57.093 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:57.352 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:57.352 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:57.352 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:57.919 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:58.178 No valid GPT data, bailing 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:58.178 22:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:58.178 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:58.178 00:34:58.178 Discovery Log Number of Records 2, Generation counter 2 00:34:58.178 =====Discovery Log Entry 0====== 00:34:58.178 trtype: tcp 00:34:58.178 adrfam: ipv4 00:34:58.178 subtype: current discovery subsystem 00:34:58.178 treq: not specified, sq flow control disable supported 00:34:58.178 portid: 1 00:34:58.178 trsvcid: 4420 00:34:58.178 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:58.178 traddr: 10.0.0.1 00:34:58.178 eflags: none 00:34:58.178 sectype: none 00:34:58.178 =====Discovery Log Entry 1====== 00:34:58.179 trtype: tcp 00:34:58.179 adrfam: ipv4 00:34:58.179 subtype: nvme subsystem 00:34:58.179 treq: not specified, sq flow control disable supported 00:34:58.179 portid: 1 00:34:58.179 trsvcid: 4420 00:34:58.179 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:58.179 traddr: 10.0.0.1 00:34:58.179 eflags: none 00:34:58.179 sectype: none 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.179 22:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 nvme0n1 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
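The xtrace records above trace one complete DH-HMAC-CHAP round for digest sha256 and DH group ffdhe2048: the key pair is mirrored to the kernel target, bdev_nvme_set_options restricts the initiator to the same digest and dhgroup, and bdev_nvme_attach_controller authenticates with --dhchap-key/--dhchap-ctrlr-key before the controller is verified and detached. A minimal standalone sketch of the same round, assuming SPDK's rpc.py client stands in for the suite's rpc_cmd wrapper and that key1/ckey1 were already registered with keyring_file_add_key as at the top of this run:

  # Illustrative reconstruction of one connect_authenticate pass
  # (not captured output; every flag below appears in the trace above).
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # the suite expects: nvme0
  rpc.py bdev_nvme_detach_controller nvme0

The remainder of the log repeats this round for every keyid (0 through 4) and then for each further dhgroup (ffdhe3072 onward), which is why the same set_options/attach/get_controllers/detach pattern recurs below.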
00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.438 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.697 nvme0n1 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.697 22:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.697 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.698 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.698 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.698 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.698 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.956 nvme0n1 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.956 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.215 nvme0n1 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:59.215 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.216 nvme0n1 00:34:59.216 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.475 22:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.475 nvme0n1 00:34:59.475 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.475 22:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.475 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.475 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.475 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.476 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.735 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.994 nvme0n1 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:59.994 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.995 
22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.995 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.254 nvme0n1 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.254 22:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.254 22:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.513 nvme0n1 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.513 22:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.513 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.772 nvme0n1 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.772 22:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.772 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.031 nvme0n1 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.031 22:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.599 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.858 nvme0n1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:01.858 22:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.858 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.118 nvme0n1 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
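Each pass of the trace above and below is the same four-step host-side check, repeated for every digest/dhgroup/keyid combination. A minimal sketch of one iteration, assuming rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py and that the key names (key2, ckey2, and so on) were registered with the SPDK keyring earlier in the run; every RPC and flag here appears verbatim in the trace:

    # 1. Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # 2. Connect to the target, authenticating with keyid 2 (plus ckey2
    #    for bidirectional authentication, when a controller key exists).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. Verify the controller actually came up, i.e. DH-HMAC-CHAP succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # 4. Tear the controller down so the next keyid starts from a clean state.
    rpc_cmd bdev_nvme_detach_controller nvme0

The bracketed nvme0 comparison is exactly the check the trace performs with [[ nvme0 == \n\v\m\e\0 ]] after each jq call.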
00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.118 22:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.377 nvme0n1 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.377 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.636 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.895 nvme0n1 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:02.895 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.896 22:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.896 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 nvme0n1 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.155 22:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.531 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.791 nvme0n1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 
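The nvmet_auth_set_key steps interleaved through the trace provision the matching secrets on the kernel (nvmet) target before each connect; only the helper's echoes are visible here. A plausible reconstruction of what it writes, assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host's NQN directory; the configfs paths are assumed, not read from this log, while the secret values are the ones echoed above:

    # Hypothetical equivalent of "nvmet_auth_set_key sha256 ffdhe6144 1".
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    key='DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==:'
    ckey='DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==:'
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test
    echo ffdhe6144      > "$host/dhchap_dhgroup"  # DH group under test
    echo "$key"         > "$host/dhchap_key"      # host secret
    echo "$ckey"        > "$host/dhchap_ctrl_key" # controller secret (bidirectional only)

In the DHHC-1 secret representation, the two-digit field after the prefix reportedly encodes which HMAC, if any, was used to transform the raw secret (00 meaning untransformed), which is why the keys and ckeys in this run carry 00 through 03 prefixes of differing payload lengths.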
00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.791 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.359 nvme0n1 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.359 22:40:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.359 22:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.618 nvme0n1 00:35:05.618 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.618 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.618 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.618 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.618 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.876 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.135 nvme0n1 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.135 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.136 22:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.703 nvme0n1 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.703 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.704 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:07.271 nvme0n1 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.271 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.272 22:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.839 nvme0n1 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:07.839 
22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.839 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.098 22:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.665 nvme0n1 00:35:08.665 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.665 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.665 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.666 
22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.666 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.234 nvme0n1 00:35:09.234 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.234 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.234 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.234 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 22:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.803 nvme0n1 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.803 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.062 nvme0n1 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:10.062 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.063 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.322 nvme0n1 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:10.322 22:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.322 22:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 nvme0n1 00:35:10.582 22:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.582 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.841 nvme0n1 00:35:10.841 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.841 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.841 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.841 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 nvme0n1 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.842 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.101 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.101 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.101 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.101 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.102 nvme0n1 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.102 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.361 
22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.361 22:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.361 22:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.361 nvme0n1 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.361 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.620 nvme0n1 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.620 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.879 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.880 nvme0n1 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.880 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:12.139 
22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.139 nvme0n1 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.139 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.139 
22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.398 22:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 nvme0n1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.657 22:41:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.916 nvme0n1 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.916 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.917 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 nvme0n1 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 22:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.435 nvme0n1 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.435 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.694 22:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.694 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.953 nvme0n1 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.953 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.954 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.212 nvme0n1 00:35:14.212 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.212 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.212 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.212 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.212 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
00:35:14.213 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:14.471 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==:
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==:
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==:
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]]
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==:
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.472 22:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:14.731 nvme0n1
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.731 22:41:04
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.731 22:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
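get_main_ns_ip, traced in full just above, chooses the address to dial by transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the selected variable name is then dereferenced, and the result (10.0.0.1 on this run) is echoed back. A condensed sketch of that logic, mirroring the traced steps, under the assumption that the transport name is carried in a variable such as TEST_TRANSPORT (the candidate map and the final value are the ones visible in the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # assumed guard
        ip=${ip_candidates[$TEST_TRANSPORT]}   # 'ip=NVMF_INITIATOR_IP' in the trace
        [[ -z ${!ip} ]] && return 1            # indirect expansion: '[[ -z 10.0.0.1 ]]' above
        echo "${!ip}"                          # resolves to 10.0.0.1 here
    }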
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.731 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.298 nvme0n1
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==:
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V:
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==:
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V:
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:15.298 22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.298
22:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.556 nvme0n1
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=:
00:35:15.556 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=:
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
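Key id 4 is the one case in this matrix with no controller key: ckey= is assigned empty, so the [[ -z '' ]] guard above skips the controller-key write on the target side, and in connect_authenticate the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces no arguments at all, which is why the attach a few entries below carries --dhchap-key key4 but no --dhchap-ctrlr-key. The expansion itself behaves like this (a standalone illustration with trimmed values, not copied from the script):

    ckeys=([0]="DHHC-1:03:..." [4]="")   # indexed array, mirroring the numeric key ids above
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo ${#ckey[@]}    # 0 -> expansion vanished, unidirectional authentication only
    keyid=0
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo ${#ckey[@]}    # 2 -> appends '--dhchap-ctrlr-key ckey0' for bidirectional authentication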
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:15.814 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:15.815 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.073 nvme0n1
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:35:16.073 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.074 22:41:05
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.074 22:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.641 nvme0n1
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:16.641 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==:
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==:
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==:
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==:
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
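connect_authenticate, entered above for sha384/ffdhe8192/key id 1, is the host-side mirror of nvmet_auth_set_key: it restricts the SPDK host to a single digest and DH group, resolves the target address, and attaches a controller with the per-keyid DH-HMAC-CHAP secrets. A sketch assembled from the rpc_cmd invocations that follow in the trace (the function framing is paraphrased, not copied):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the controller (and its nvme0n1 namespace) only appears if the CHAP exchange succeeds
    }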
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:16.900 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:17.468 nvme0n1
00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- #
xtrace_disable 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.468 22:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.468 
22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.468 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.036 nvme0n1 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.036 22:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.604 nvme0n1 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.604 22:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.604 22:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.604 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.171 nvme0n1 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.171 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.430 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.431 22:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:19.431 nvme0n1 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.431 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.690 nvme0n1 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:19.690 
22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:19.690 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.691 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.950 nvme0n1 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.950 
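connect_authenticate (auth.sh@104) then drives the host side of the same iteration: it narrows SPDK's allowed DH-HMAC-CHAP parameters to the combination under test and attaches with the matching key slot. Reconstructed from the RPC names and flags visible verbatim in the trace; the rpc.py path is an assumption (the script goes through its rpc_cmd wrapper), and key3/ckey3 are presumed to be keyring entries registered earlier in the test:

  rpc=./scripts/rpc.py   # assumed invocation; auth.sh calls rpc_cmd instead
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3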
22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.950 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.210 nvme0n1 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.210 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 nvme0n1 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.469 22:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.469 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.470 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.470 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.470 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.728 nvme0n1 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.728 
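Every attach in this trace is followed by the same check-and-teardown (auth.sh@64-65): list the controllers, confirm that nvme0 actually authenticated and came up, then detach before moving to the next key slot. In script form, matching the jq filter seen above and reusing the $rpc assumption from the previous sketch:

  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                 # a failed handshake would leave no controller behind
  $rpc bdev_nvme_detach_controller nvme0

One detail worth noting: key slot 4 has an empty controller key (ckey= at auth.sh@46, and the [[ -z '' ]] branch at auth.sh@51), so its attach omits --dhchap-ctrlr-key entirely — that pass exercises unidirectional rather than bidirectional authentication.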
22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:20.728 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.729 22:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.729 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.987 nvme0n1 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:20.987 22:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.987 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.988 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.246 nvme0n1 00:35:21.246 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.246 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.246 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.247 22:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.247 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.506 nvme0n1 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.506 22:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:21.506 
22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.506 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
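At this point the sweep finishes ffdhe3072 and moves on to ffdhe4096. The trace markers imply the following overall shape for this phase — a reconstruction from the loop lines at auth.sh@101-104, not a quotation of the script (the digest stays fixed at sha512 throughout this section):

  for dhgroup in "${dhgroups[@]}"; do              # auth.sh@101
      for keyid in "${!keys[@]}"; do               # auth.sh@102
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # target side, auth.sh@103
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host side,  auth.sh@104
      done
  done

With five key slots per group, each DH group contributes five full connect/authenticate/detach cycles, which is why the recurring nvme0n1 lines — the namespace surfacing after each apparently successful attach — and the get/detach pairs repeat throughout this section.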
00:35:21.766 nvme0n1 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.766 22:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.766 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.025 nvme0n1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.025 22:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.025 22:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.025 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.284 nvme0n1 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.284 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.285 22:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.544 nvme0n1 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.544 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.803 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.063 nvme0n1 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.063 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.322 nvme0n1 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
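Note that the keyid=4 attach above passes --dhchap-key key4 with no --dhchap-ctrlr-key: ckeys[4] is empty (the [[ -z '' ]] at auth.sh@51), so the ${ckeys[keyid]:+...} expansion at auth.sh@58 yields nothing and bidirectional (controller) authentication is skipped for that key. A minimal reproduction of the idiom:

    # ':+' substitutes the alternate words only when the variable is set and
    # non-empty, so an empty ckeys[4] produces a zero-element array.
    declare -a ckeys=([1]=secret1 [4]=)
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0 -> the flag is simply dropped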
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.322 22:41:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.322 22:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.889 nvme0n1 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
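The nvmf/common.sh@769 through @783 lines above are get_main_ns_ip resolving which address the initiator should dial. Reconstructed from the executed branches (the real source may differ; TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 in this run):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                     # common.sh@775
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # common.sh@775
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # common.sh@776
        [[ -z ${!ip} ]] && return 1                              # common.sh@778
        echo "${!ip}"                                            # common.sh@783 -> 10.0.0.1
    }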
key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.889 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.890 22:41:13 
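The secrets echoed throughout follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>:, where <hh> indicates the hash class the secret was generated for (00 = unspecified, 01 = SHA-256/32-byte, 02 = SHA-384/48-byte, 03 = SHA-512/64-byte) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. That interpretation comes from the spec, not from this log; a quick length check on the keyid=2 secret used in this section is consistent with it:

    key='DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW:'
    cut -d: -f3 <<<"$key" | base64 -d | wc -c   # 36 bytes = 32-byte secret + CRC-32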
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.890 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.149 nvme0n1 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
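The odd-looking \n\v\m\e\0 in the checks above is not corruption: when the right-hand side of == inside [[ ]] is quoted in the source, bash's xtrace escapes every character to show it is matched literally rather than as a glob pattern. Unescaped, the auth.sh@64 assertion is simply:

    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]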
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.149 22:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.716 nvme0n1 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.716 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
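The echoes at auth.sh@48 through @51 above stage the digest, DH group, host key, and (when non-empty) controller key for the kernel nvmet target. Their destinations are redirected away from the trace; assuming the Linux nvmet-auth configfs layout, the writes land roughly as follows (attribute names are an assumption, not taken from this log):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # auth.sh@48
    echo 'ffdhe6144'    > "$host/dhchap_dhgroup"   # auth.sh@49
    echo "$key"         > "$host/dhchap_key"       # auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51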
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.717 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.975 nvme0n1 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.975 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.234 22:41:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.234 22:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.492 nvme0n1 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
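From here the outer loop advances to the last DH group, ffdhe8192. The auth.sh@101/@102 markers seen throughout are the two driver loops; reconstructed, with the digest pinned to sha512 in this phase (array contents beyond what this stretch of the trace shows are assumptions):

    for dhgroup in "${dhgroups[@]}"; do       # ... ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do        # 0 1 2 3 4
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # auth.sh@103
            connect_authenticate sha512 "$dhgroup" "$keyid"   # auth.sh@104
        done
    done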
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjgyM2ExMzFjMWYxYTA4NTI2NzYwZmEzOGRhNzYzZDSynGQy: 00:35:25.492 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: ]] 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJmN2Q0NGE5ZmIzNDZkMzgwMDYxYzNlYmNkZjJiYTc2ODY3NzcxMGNiODYyNGUyYWYxNjczNmI4ODE5NGE4OBa6YQI=: 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.493 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.058 nvme0n1 00:35:26.058 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.058 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.058 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.058 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.058 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.316 22:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.883 nvme0n1 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.883 22:41:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:26.883 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.884 22:41:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.884 22:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.450 nvme0n1 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRjMmU3YThmNTgyZDExNTgzM2Y5YWI2NmQzNDBhZTkwNGQ2YThjYTRlMzg5NzFjkT8aXA==: 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjVmY2E4NTlmYjY2YmRkYTAyMjk3MjNkMjJkNDg2Y2Xi4/9V: 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.450 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:27.451 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.451 
22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.385 nvme0n1 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:28.385 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjQwMzIwZDVlZDRmZGRjNTkxZWY3ZTU0MTBiODc5NjM0YTFmNmNkNjA1ZmVhMmQ2MmJjNDAwZGYwYTk0ZjY1ZXYRDVM=: 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.386 22:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.952 nvme0n1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
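
Key slot 4 above carries no controller key (ckey is empty at host/auth.sh@46, so the [[ -z '' ]] branch at @51 is skipped), and the @58 expansion silently drops the --dhchap-ctrlr-key pair in that case, leaving authentication unidirectional. A reduced sketch of the ${var:+...} idiom doing that work:

    # Sketch: expand to the optional flag pair only when a controller key
    # exists, so the attach call can splice in "${ckey[@]}" unconditionally.
    ckeys=( [3]='DHHC-1:00:placeholder:' [4]='' )              # slot 4: host-only auth
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra args: ${#ckey[@]}"                             # 0 here, 2 for keyid=3
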
-- # keyid=1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 request: 00:35:28.953 { 00:35:28.953 "name": "nvme0", 00:35:28.953 "trtype": "tcp", 00:35:28.953 "traddr": "10.0.0.1", 00:35:28.953 "adrfam": "ipv4", 00:35:28.953 "trsvcid": "4420", 00:35:28.953 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:28.953 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:28.953 "prchk_reftag": false, 00:35:28.953 "prchk_guard": false, 00:35:28.953 "hdgst": false, 00:35:28.953 "ddgst": false, 00:35:28.953 "allow_unrecognized_csi": false, 00:35:28.953 "method": "bdev_nvme_attach_controller", 00:35:28.953 "req_id": 1 00:35:28.953 } 00:35:28.953 Got JSON-RPC error response 00:35:28.953 response: 00:35:28.953 { 00:35:28.953 "code": -5, 00:35:28.953 "message": "Input/output error" 00:35:28.953 } 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
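
The attach attempts with missing or mismatched keys are wrapped in NOT, so the JSON-RPC error code -5 (Input/output error) above is the asserted outcome: once the kernel target enforces DH-HMAC-CHAP, an unauthenticated connect must fail without aborting the test run. A minimal sketch of that inversion (the real helper in common/autotest_common.sh also tracks es and validates the argument; this reduction assumes only the exit-status flip):

    # Sketch: invert a command's exit status so an expected failure passes
    # and an unexpected success fails the test.
    NOT() {
        if "$@"; then
            return 1        # unexpectedly succeeded
        fi
        return 0            # failed, as the test requires
    }
    NOT false && echo "negative test ok"
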
00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.953 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 request: 00:35:28.953 { 00:35:28.953 "name": "nvme0", 00:35:28.953 "trtype": "tcp", 00:35:28.953 "traddr": "10.0.0.1", 00:35:28.953 "adrfam": "ipv4", 00:35:28.953 "trsvcid": "4420", 00:35:28.953 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:28.953 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:28.953 "prchk_reftag": false, 00:35:28.953 "prchk_guard": false, 00:35:28.953 "hdgst": false, 00:35:28.953 "ddgst": false, 00:35:28.953 "dhchap_key": "key2", 00:35:28.953 "allow_unrecognized_csi": false, 00:35:28.953 "method": "bdev_nvme_attach_controller", 00:35:28.953 "req_id": 1 00:35:28.953 } 00:35:28.953 Got JSON-RPC error response 00:35:28.953 response: 00:35:28.953 { 00:35:28.953 "code": -5, 00:35:28.953 "message": "Input/output error" 00:35:28.953 } 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.954 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
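
get_main_ns_ip, traced before every attach (nvmf/common.sh@769-783), maps the transport to the name of the variable holding the address to dial and then dereferences it; for tcp that resolves NVMF_INITIATOR_IP to 10.0.0.1. A sketch of the pattern, assuming ${!ip} indirection behind the @778/@783 lines:

    # Sketch: transport -> variable-name lookup, then indirect expansion.
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=192.168.100.8
    declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
    TEST_TRANSPORT=tcp
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] || echo "${!ip}"     # -> 10.0.0.1
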
00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:29.212 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.213 request: 00:35:29.213 { 00:35:29.213 "name": "nvme0", 00:35:29.213 "trtype": "tcp", 00:35:29.213 "traddr": "10.0.0.1", 00:35:29.213 "adrfam": "ipv4", 00:35:29.213 "trsvcid": "4420", 00:35:29.213 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:29.213 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:29.213 "prchk_reftag": false, 00:35:29.213 "prchk_guard": false, 00:35:29.213 "hdgst": false, 00:35:29.213 "ddgst": false, 00:35:29.213 "dhchap_key": "key1", 00:35:29.213 "dhchap_ctrlr_key": "ckey2", 00:35:29.213 "allow_unrecognized_csi": false, 00:35:29.213 "method": "bdev_nvme_attach_controller", 00:35:29.213 "req_id": 1 00:35:29.213 } 00:35:29.213 Got JSON-RPC error response 00:35:29.213 response: 00:35:29.213 { 00:35:29.213 "code": -5, 00:35:29.213 "message": "Input/output 
error" 00:35:29.213 } 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.213 nvme0n1 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
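
The host/auth.sh@128 attach above is the positive case: key1/ckey1 match what the target holds, and the one-second loss/reconnect timers ensure a controller whose reauthentication later fails is reaped quickly. Issued outside the rpc_cmd wrapper, the same call is simply (flags exactly as traced, rpc.py path as used throughout this run):

    # Sketch: the authenticated attach from host/auth.sh@128, stand-alone.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
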
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.213 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.471 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.472 22:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.472 request: 00:35:29.472 { 00:35:29.472 "name": "nvme0", 00:35:29.472 "dhchap_key": "key1", 00:35:29.472 "dhchap_ctrlr_key": "ckey2", 00:35:29.472 "method": "bdev_nvme_set_keys", 00:35:29.472 "req_id": 1 00:35:29.472 } 00:35:29.472 Got JSON-RPC error response 00:35:29.472 response: 00:35:29.472 { 00:35:29.472 "code": -13, 00:35:29.472 "message": "Permission denied" 00:35:29.472 } 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
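
The @133/@136 pair above exercises live rekeying: rotating the attached nvme0 to key2/ckey2 succeeds because the target was just given the matching secret, while rotating to a pair the target does not hold is refused with -13 (Permission denied). Stand-alone, the successful rotation is (flags as traced):

    # Sketch: DH-HMAC-CHAP rekey of an already-attached controller.
    ./scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
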
!es == 0 )) 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:29.472 22:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:30.406 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.406 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:30.406 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.406 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.665 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.665 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:30.665 22:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViZjVkM2I3ZjdmZGFkYjgzYzAwMThhZjI5MDlkZjY5ZWFiMmNmYmU4NWNmMDg1fXfMiA==: 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: ]] 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
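
The @137/@138 loop above polls bdev_nvme_get_controllers until the controller count reaches zero before the next key round begins. Reduced, with an iteration cap added here as a safety assumption:

    # Sketch: wait for bdev_nvme to report no remaining controllers.
    for _ in $(seq 1 30); do
        n=$(./scripts/rpc.py bdev_nvme_get_controllers | jq length)
        (( n != 0 )) || break
        sleep 1s
    done
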
DHHC-1:02:ZGRkMmQ0MDBkYjQ1YmIyMDQyNTIxYTBhMmVjOWNlYTc3OThhNzY1Mjk1MzFmNTA4eBsJZA==: 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.599 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.857 nvme0n1 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.857 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZhYmEyODMzMmM1NjliMzNhZmMwZjQxZjEyOGVhZTFPORsW: 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: ]] 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWRhNmU4ZjYyMGNhYTQyYzQ1ODE0MWZjOWE5NjNhNDb+Fb/V: 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.858 request: 00:35:31.858 { 00:35:31.858 "name": "nvme0", 00:35:31.858 "dhchap_key": "key2", 00:35:31.858 "dhchap_ctrlr_key": "ckey1", 00:35:31.858 "method": "bdev_nvme_set_keys", 00:35:31.858 "req_id": 1 00:35:31.858 } 00:35:31.858 Got JSON-RPC error response 00:35:31.858 response: 00:35:31.858 { 00:35:31.858 "code": -13, 00:35:31.858 "message": "Permission denied" 00:35:31.858 } 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:31.858 22:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:32.791 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.791 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:32.791 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.791 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.049 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:33.049 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:33.049 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:33.050 22:41:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.050 rmmod nvme_tcp 00:35:33.050 rmmod nvme_fabrics 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 513483 ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 513483 ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513483' 00:35:33.050 killing process with pid 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 513483 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:33.050 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:35:33.308 22:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:35.214 22:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:38.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:38.504 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:39.073 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:39.073 22:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ULe /tmp/spdk.key-null.7Pk /tmp/spdk.key-sha256.xls /tmp/spdk.key-sha384.p7v /tmp/spdk.key-sha512.nIV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:39.073 22:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:42.367 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:42.367 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
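
The teardown above is strictly bottom-up: host side first (sync, unload nvme-tcp and nvme-fabrics with retries while references drain, kill the nvmf_tgt pid, restore iptables, flush the test interface), then the kernel target's configfs tree, where each rmdir only succeeds once its children and the port-to-subsystem symlink are gone. Condensed, with the echo 0 at nvmf/common.sh@714 assumed to disable the namespace:

    # Sketch: configfs teardown for the kernel nvmet target, mirroring
    # host/auth.sh@25-27 and nvmf/common.sh@714-723.
    nqn=nqn.2024-02.io.spdk:cnode0
    cfg=/sys/kernel/config/nvmet
    rm    "$cfg/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumed target of @714
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet
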
00:35:42.367 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:42.367 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:42.367 00:35:42.367 real 0m55.772s 00:35:42.367 user 0m50.592s 00:35:42.367 sys 0m12.570s 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.367 ************************************ 00:35:42.367 END TEST nvmf_auth_host 00:35:42.367 ************************************ 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.367 ************************************ 00:35:42.367 START TEST nvmf_digest 00:35:42.367 ************************************ 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:42.367 * Looking for test storage... 
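
Every suite in this log is framed the same way: run_test prints the START/END banners, times the script (the real 0m55.772s block above), and propagates its exit status. A reduced skeleton of that framing, assuming the actual helper in common/autotest_common.sh does more bookkeeping:

    # Sketch: the banner-and-timing frame visible around each suite.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
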
00:35:42.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.367 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:42.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.368 --rc genhtml_branch_coverage=1 00:35:42.368 --rc genhtml_function_coverage=1 00:35:42.368 --rc genhtml_legend=1 00:35:42.368 --rc geninfo_all_blocks=1 00:35:42.368 --rc geninfo_unexecuted_blocks=1 00:35:42.368 00:35:42.368 ' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:42.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.368 --rc genhtml_branch_coverage=1 00:35:42.368 --rc genhtml_function_coverage=1 00:35:42.368 --rc genhtml_legend=1 00:35:42.368 --rc geninfo_all_blocks=1 00:35:42.368 --rc geninfo_unexecuted_blocks=1 00:35:42.368 00:35:42.368 ' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:42.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.368 --rc genhtml_branch_coverage=1 00:35:42.368 --rc genhtml_function_coverage=1 00:35:42.368 --rc genhtml_legend=1 00:35:42.368 --rc geninfo_all_blocks=1 00:35:42.368 --rc geninfo_unexecuted_blocks=1 00:35:42.368 00:35:42.368 ' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:42.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.368 --rc genhtml_branch_coverage=1 00:35:42.368 --rc genhtml_function_coverage=1 00:35:42.368 --rc genhtml_legend=1 00:35:42.368 --rc geninfo_all_blocks=1 00:35:42.368 --rc geninfo_unexecuted_blocks=1 00:35:42.368 00:35:42.368 ' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.368 
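
digest.sh opens by checking whether the installed lcov predates 2.x, using the field-wise comparison traced above (scripts/common.sh@333-368): split both versions on '.', '-' and ':', then compare numerically field by field, padding the shorter one with zeros. A reduced '<'-only sketch, numeric fields only:

    # Sketch of the comparison behind "lt 1.15 2".
    version_lt() {
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal
    }
    version_lt 1.15 2 && echo "lcov older than 2"    # 1 < 2 on the first field
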
22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:42.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:42.368 22:41:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:42.368 22:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:48.935 
22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:48.935 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:48.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:48.935 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:48.936 Found net devices under 0000:af:00.0: cvl_0_0 
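The scan loop traced above resolves each whitelisted PCI function to its kernel network interface through sysfs: it globs the device's net/ directory and strips the path down to the interface name. A condensed sketch of that mapping, using only constructs visible in the trace (the pci_bus_cache lookups and the ice/unbound driver checks are elided):

  # for each NIC PCI address found earlier (e.g. 0000:af:00.0) ...
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done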
00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:48.936 Found net devices under 0000:af:00.1: cvl_0_1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:48.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:48.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:35:48.936 00:35:48.936 --- 10.0.0.2 ping statistics --- 00:35:48.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.936 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:48.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:48.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:35:48.936 00:35:48.936 --- 10.0.0.1 ping statistics --- 00:35:48.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.936 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:48.936 ************************************ 00:35:48.936 START TEST nvmf_digest_clean 00:35:48.936 ************************************ 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=527287 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 527287 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527287 ']' 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.936 [2024-12-16 22:41:37.798463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:48.936 [2024-12-16 22:41:37.798510] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.936 [2024-12-16 22:41:37.877554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.936 [2024-12-16 22:41:37.899690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.936 [2024-12-16 22:41:37.899727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.936 [2024-12-16 22:41:37.899734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.936 [2024-12-16 22:41:37.899740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.936 [2024-12-16 22:41:37.899745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
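Before the target comes up, nvmf_tcp_init (traced above) splits the two E810 ports into a point-to-point test rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP/4420 is opened in iptables, and both directions are ping-verified. A condensed replay of those steps with the names and addresses from the trace (the ip -4 addr flush calls are elided):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  # the target itself then runs inside the namespace, paused until RPC setup:
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &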
00:35:48.936 [2024-12-16 22:41:37.900251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.936 22:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.936 null0 00:35:48.936 [2024-12-16 22:41:38.071914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.936 [2024-12-16 22:41:38.096101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.936 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.936 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527417 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527417 /var/tmp/bperf.sock 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527417 ']' 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:48.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.937 [2024-12-16 22:41:38.147259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:48.937 [2024-12-16 22:41:38.147299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527417 ] 00:35:48.937 [2024-12-16 22:41:38.201944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.937 [2024-12-16 22:41:38.223810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:48.937 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.195 nvme0n1 00:35:49.195 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:49.195 22:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:49.454 Running I/O for 2 seconds... 
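Each bperf pass follows the same RPC choreography visible in the trace: bdevperf starts with --wait-for-rpc on its own UNIX socket, the harness finishes framework init, attaches an NVMe-oF TCP controller with data digest enabled (--ddgst, which is what makes this a digest test), and only then kicks off the preconfigured workload. Condensed, with the long Jenkins paths shortened to the tool names:

  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # the workload shape (-w/-o/-q/-t) was fixed on the bdevperf command line;
  # this RPC just starts it and waits for the results dump:
  bdevperf.py -s /var/tmp/bperf.sock perform_tests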
00:35:51.322 25216.00 IOPS, 98.50 MiB/s [2024-12-16T21:41:41.024Z] 24998.50 IOPS, 97.65 MiB/s 00:35:51.323 Latency(us) 00:35:51.323 [2024-12-16T21:41:41.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.323 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:51.323 nvme0n1 : 2.00 25015.85 97.72 0.00 0.00 5112.14 2449.80 13107.20 00:35:51.323 [2024-12-16T21:41:41.024Z] =================================================================================================================== 00:35:51.323 [2024-12-16T21:41:41.024Z] Total : 25015.85 97.72 0.00 0.00 5112.14 2449.80 13107.20 00:35:51.323 { 00:35:51.323 "results": [ 00:35:51.323 { 00:35:51.323 "job": "nvme0n1", 00:35:51.323 "core_mask": "0x2", 00:35:51.323 "workload": "randread", 00:35:51.323 "status": "finished", 00:35:51.323 "queue_depth": 128, 00:35:51.323 "io_size": 4096, 00:35:51.323 "runtime": 2.00373, 00:35:51.323 "iops": 25015.845448239033, 00:35:51.323 "mibps": 97.71814628218372, 00:35:51.323 "io_failed": 0, 00:35:51.323 "io_timeout": 0, 00:35:51.323 "avg_latency_us": 5112.137975822349, 00:35:51.323 "min_latency_us": 2449.7980952380954, 00:35:51.323 "max_latency_us": 13107.2 00:35:51.323 } 00:35:51.323 ], 00:35:51.323 "core_count": 1 00:35:51.323 } 00:35:51.323 22:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:51.323 22:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:51.323 22:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:51.323 22:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:51.323 | select(.opcode=="crc32c") 00:35:51.323 | "\(.module_name) \(.executed)"' 00:35:51.323 22:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527417 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527417 ']' 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527417 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527417 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527417' 00:35:51.581 killing process with pid 527417 00:35:51.581 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527417 00:35:51.581 Received shutdown signal, test time was about 2.000000 seconds 00:35:51.581 00:35:51.581 Latency(us) 00:35:51.581 [2024-12-16T21:41:41.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.581 [2024-12-16T21:41:41.283Z] =================================================================================================================== 00:35:51.582 [2024-12-16T21:41:41.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.582 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527417 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527881 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527881 /var/tmp/bperf.sock 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527881 ']' 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:51.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:51.840 [2024-12-16 22:41:41.372174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:51.840 [2024-12-16 22:41:41.372244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527881 ] 00:35:51.840 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:51.840 Zero copy mechanism will not be used. 00:35:51.840 [2024-12-16 22:41:41.444516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.840 [2024-12-16 22:41:41.466789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:51.840 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:52.099 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.099 22:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.666 nvme0n1 00:35:52.666 22:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:52.666 22:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:52.666 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:52.666 Zero copy mechanism will not be used. 00:35:52.666 Running I/O for 2 seconds... 
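After each run the harness does not trust the I/O numbers alone: it pulls bdevperf's accel framework statistics and asserts both that the crc32c opcode (the digest calculation) actually executed and that it ran in the expected module, which is software here since the test was invoked with scan_dsa=false. A condensed form of the check traced after the first pass above (same jq filter, same variable names):

  read -r acc_module acc_executed < <(
    rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))            # digests were really computed
  [[ $acc_module == software ]]     # and not silently offloaded elsewhere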
00:35:54.977 5635.00 IOPS, 704.38 MiB/s [2024-12-16T21:41:44.678Z] 5753.50 IOPS, 719.19 MiB/s 00:35:54.977 Latency(us) 00:35:54.977 [2024-12-16T21:41:44.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.977 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:54.977 nvme0n1 : 2.00 5758.21 719.78 0.00 0.00 2775.82 643.66 4556.31 00:35:54.977 [2024-12-16T21:41:44.678Z] =================================================================================================================== 00:35:54.977 [2024-12-16T21:41:44.678Z] Total : 5758.21 719.78 0.00 0.00 2775.82 643.66 4556.31 00:35:54.977 { 00:35:54.977 "results": [ 00:35:54.977 { 00:35:54.977 "job": "nvme0n1", 00:35:54.977 "core_mask": "0x2", 00:35:54.977 "workload": "randread", 00:35:54.977 "status": "finished", 00:35:54.977 "queue_depth": 16, 00:35:54.977 "io_size": 131072, 00:35:54.977 "runtime": 2.003749, 00:35:54.977 "iops": 5758.2062423986235, 00:35:54.977 "mibps": 719.7757802998279, 00:35:54.977 "io_failed": 0, 00:35:54.977 "io_timeout": 0, 00:35:54.977 "avg_latency_us": 2775.8150007016156, 00:35:54.977 "min_latency_us": 643.6571428571428, 00:35:54.977 "max_latency_us": 4556.312380952381 00:35:54.977 } 00:35:54.977 ], 00:35:54.977 "core_count": 1 00:35:54.977 } 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:54.977 | select(.opcode=="crc32c") 00:35:54.977 | "\(.module_name) \(.executed)"' 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527881 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527881 ']' 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527881 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527881 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527881' 00:35:54.977 killing process with pid 527881 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527881 00:35:54.977 Received shutdown signal, test time was about 2.000000 seconds 00:35:54.977 00:35:54.977 Latency(us) 00:35:54.977 [2024-12-16T21:41:44.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.977 [2024-12-16T21:41:44.678Z] =================================================================================================================== 00:35:54.977 [2024-12-16T21:41:44.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527881 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528360 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528360 /var/tmp/bperf.sock 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528360 ']' 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.977 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 [2024-12-16 22:41:44.720519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:55.236 [2024-12-16 22:41:44.720568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528360 ] 00:35:55.236 [2024-12-16 22:41:44.792793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.236 [2024-12-16 22:41:44.812497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.236 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.236 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:55.236 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:55.236 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:55.236 22:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:55.494 22:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.494 22:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.059 nvme0n1 00:35:56.059 22:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:56.059 22:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.059 Running I/O for 2 seconds... 
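The MiB/s column in these result tables is simply IOPS scaled by the I/O size, so each run can be cross-checked by hand: the first randread pass reported 25015.85 IOPS at 4096 B, and 25015.85 * 4096 / 2^20 = 97.72 MiB/s, exactly the figure in its table. As a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 25015.85 * 4096 / (1024 * 1024) }'
  # -> 97.72 MiB/s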
00:35:57.928 27486.00 IOPS, 107.37 MiB/s [2024-12-16T21:41:47.629Z] 27559.00 IOPS, 107.65 MiB/s 00:35:57.928 Latency(us) 00:35:57.928 [2024-12-16T21:41:47.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.928 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:57.928 nvme0n1 : 2.01 27559.30 107.65 0.00 0.00 4636.02 3495.25 7989.15 00:35:57.928 [2024-12-16T21:41:47.629Z] =================================================================================================================== 00:35:57.928 [2024-12-16T21:41:47.629Z] Total : 27559.30 107.65 0.00 0.00 4636.02 3495.25 7989.15 00:35:57.928 { 00:35:57.928 "results": [ 00:35:57.928 { 00:35:57.928 "job": "nvme0n1", 00:35:57.928 "core_mask": "0x2", 00:35:57.928 "workload": "randwrite", 00:35:57.928 "status": "finished", 00:35:57.928 "queue_depth": 128, 00:35:57.928 "io_size": 4096, 00:35:57.928 "runtime": 2.005784, 00:35:57.928 "iops": 27559.298508712804, 00:35:57.928 "mibps": 107.65350979965939, 00:35:57.928 "io_failed": 0, 00:35:57.928 "io_timeout": 0, 00:35:57.928 "avg_latency_us": 4636.018468037745, 00:35:57.928 "min_latency_us": 3495.2533333333336, 00:35:57.928 "max_latency_us": 7989.150476190476 00:35:57.928 } 00:35:57.928 ], 00:35:57.928 "core_count": 1 00:35:57.928 } 00:35:57.928 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:57.929 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:57.929 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:57.929 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:57.929 | select(.opcode=="crc32c") 00:35:57.929 | "\(.module_name) \(.executed)"' 00:35:57.929 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528360 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528360 ']' 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528360 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.187 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528360 00:35:58.445 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:58.445 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:35:58.445 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528360' 00:35:58.445 killing process with pid 528360 00:35:58.445 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528360 00:35:58.445 Received shutdown signal, test time was about 2.000000 seconds 00:35:58.445 00:35:58.445 Latency(us) 00:35:58.445 [2024-12-16T21:41:48.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.445 [2024-12-16T21:41:48.146Z] =================================================================================================================== 00:35:58.445 [2024-12-16T21:41:48.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.445 22:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528360 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=529027 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 529027 /var/tmp/bperf.sock 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 529027 ']' 00:35:58.445 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:58.446 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.446 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:58.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:58.446 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.446 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:58.446 [2024-12-16 22:41:48.090016] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:35:58.446 [2024-12-16 22:41:48.090064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529027 ] 00:35:58.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:58.446 Zero copy mechanism will not be used. 00:35:58.704 [2024-12-16 22:41:48.163121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.704 [2024-12-16 22:41:48.185184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.704 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.704 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:58.704 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:58.704 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:58.704 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:58.963 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.963 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:59.221 nvme0n1 00:35:59.221 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:59.221 22:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.479 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.479 Zero copy mechanism will not be used. 00:35:59.479 Running I/O for 2 seconds... 
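Alongside the human-readable table, bdevperf emits each run as JSON (a "results" array plus "core_count"), which is the form scripted consumers would parse. A small sketch pulling the headline numbers out of one of the blobs above; result.json is a hypothetical file holding such a blob:

  jq -r '.results[0]
    | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (qd \(.queue_depth), \(.io_size) B)"' result.json
  # e.g. -> nvme0n1: 27559.298508712804 IOPS, avg 4636.018468037745 us (qd 128, 4096 B)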
00:36:01.351 7466.00 IOPS, 933.25 MiB/s [2024-12-16T21:41:51.052Z] 7252.00 IOPS, 906.50 MiB/s 00:36:01.351 Latency(us) 00:36:01.351 [2024-12-16T21:41:51.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.351 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:01.351 nvme0n1 : 2.00 7247.41 905.93 0.00 0.00 2203.41 1810.04 5305.30 00:36:01.351 [2024-12-16T21:41:51.052Z] =================================================================================================================== 00:36:01.351 [2024-12-16T21:41:51.052Z] Total : 7247.41 905.93 0.00 0.00 2203.41 1810.04 5305.30 00:36:01.351 { 00:36:01.351 "results": [ 00:36:01.351 { 00:36:01.351 "job": "nvme0n1", 00:36:01.351 "core_mask": "0x2", 00:36:01.351 "workload": "randwrite", 00:36:01.351 "status": "finished", 00:36:01.351 "queue_depth": 16, 00:36:01.351 "io_size": 131072, 00:36:01.351 "runtime": 2.003473, 00:36:01.351 "iops": 7247.4148640885105, 00:36:01.351 "mibps": 905.9268580110638, 00:36:01.351 "io_failed": 0, 00:36:01.351 "io_timeout": 0, 00:36:01.351 "avg_latency_us": 2203.409160697888, 00:36:01.351 "min_latency_us": 1810.0419047619048, 00:36:01.351 "max_latency_us": 5305.295238095238 00:36:01.351 } 00:36:01.351 ], 00:36:01.351 "core_count": 1 00:36:01.351 } 00:36:01.351 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:01.351 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:01.351 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:01.351 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:01.351 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:01.351 | select(.opcode=="crc32c") 00:36:01.351 | "\(.module_name) \(.executed)"' 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 529027 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 529027 ']' 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 529027 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529027 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529027' 00:36:01.610 killing process with pid 529027 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 529027 00:36:01.610 Received shutdown signal, test time was about 2.000000 seconds 00:36:01.610 00:36:01.610 Latency(us) 00:36:01.610 [2024-12-16T21:41:51.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.610 [2024-12-16T21:41:51.311Z] =================================================================================================================== 00:36:01.610 [2024-12-16T21:41:51.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:01.610 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 529027 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 527287 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527287 ']' 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527287 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527287 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527287' 00:36:01.869 killing process with pid 527287 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527287 00:36:01.869 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527287 00:36:02.128 00:36:02.128 real 0m13.899s 00:36:02.128 user 0m26.632s 00:36:02.128 sys 0m4.548s 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:02.128 ************************************ 00:36:02.128 END TEST nvmf_digest_clean 00:36:02.128 ************************************ 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.128 ************************************ 00:36:02.128 START TEST nvmf_digest_error 00:36:02.128 ************************************ 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=529514 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 529514 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529514 ']' 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.128 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.128 [2024-12-16 22:41:51.768430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:02.128 [2024-12-16 22:41:51.768473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.387 [2024-12-16 22:41:51.844374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.387 [2024-12-16 22:41:51.865879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.387 [2024-12-16 22:41:51.865913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.387 [2024-12-16 22:41:51.865920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.387 [2024-12-16 22:41:51.865926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.387 [2024-12-16 22:41:51.865931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
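Between the two tests, the clean run verified its result and tore everything down. A sketch reconstructed from the trace above (the jq filter and killprocess steps are copied from the xtrace; the assertions paraphrase digest.sh@94-96; bperf_rpc as sketched earlier):

    # Ask bdevperf's accel layer how many crc32c operations ran, and where.
    read -r acc_module acc_executed < <(bperf_rpc accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests really were computed
    [[ $acc_module == software ]]     # by the expected (software) module

    # killprocess, as traced: never signal a sudo wrapper, then kill and reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                           # already gone?
        if [[ $(uname) == Linux ]]; then
            [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
    killprocess 529027    # bdevperf, reactor_1
    killprocess 527287    # nvmf_tgt, reactor_0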
00:36:02.387 [2024-12-16 22:41:51.866420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.387 [2024-12-16 22:41:51.958916] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.387 22:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.387 null0 00:36:02.387 [2024-12-16 22:41:52.045966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.387 [2024-12-16 22:41:52.070151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=529637 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 529637 /var/tmp/bperf.sock 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 529637 ']' 
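The error test restarts the target paused so that crc32c can be rerouted to the accel "error" module before the framework initializes. A sketch of the startup as traced; rpc_cmd talks to the target on /var/tmp/spdk.sock and waitforlisten is the harness's socket poller. digest.sh@43 actually feeds rpc_cmd a JSON batch that this log does not expand, so the null-bdev/subsystem calls below are assumptions consistent with the notices (null0 created, TCP transport init, listener on 10.0.0.2:4420):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                       # wait on /var/tmp/spdk.sock

    rpc_cmd accel_assign_opc -o crc32c -m error    # "crc32c ... assigned to module error"
    rpc_cmd framework_start_init

    # Assumed expansion of common_target_config:
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420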
00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:02.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.387 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.646 [2024-12-16 22:41:52.121541] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:02.646 [2024-12-16 22:41:52.121581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529637 ] 00:36:02.646 [2024-12-16 22:41:52.195017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.646 [2024-12-16 22:41:52.217566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.646 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.646 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:02.646 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:02.646 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:02.905 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.163 nvme0n1 00:36:03.163 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:03.163 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.163 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.163 
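run_bperf_err then points a fresh bdevperf instance at the same subsystem and arms the fault. A sketch with the flags as traced (rpc_cmd goes to the target's socket, bperf_rpc and bperf_py to bdevperf's, as above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable       # attach with good digests
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 # now corrupt crc32c
    bperf_py perform_tests

With the target's crc32c corrupted, every data digest it emits is wrong, so the host's receive path (nvme_tcp.c:1365) rejects each READ; that is the wall of errors that follows.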
22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.163 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:03.163 22:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:03.423 Running I/O for 2 seconds... 00:36:03.423 [2024-12-16 22:41:52.931442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.931476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.931486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.941796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.941820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.941829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.951479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.951501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.951509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.963222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.963244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.963253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.975401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.975421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.975430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.986111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.986131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.986139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:52.996493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:52.996513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:52.996521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.004926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.004947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.014906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.014926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.014935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.023822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.023842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.023849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.033104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.033124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.033132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.041915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.041934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.041942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.052729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.052748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.052756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.061403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.061422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.061430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.072825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.072847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.085042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.085063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.085075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.093743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.093764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.093773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.104371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.104392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.104400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.423 [2024-12-16 22:41:53.115133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.423 [2024-12-16 22:41:53.115153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.423 [2024-12-16 22:41:53.115161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.125937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.125958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.125966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.134848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.134868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.134876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.143442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.143460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.143468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.152433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.152452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.152460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.161915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.161934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.161942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.171004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.171027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.171035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.181128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.181147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.181155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.189692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.189710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20322 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.189718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.199040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.199059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.199067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.208618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.208640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.208647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.219356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.219376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.219384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.228781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.228800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.228808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.238017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.238038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.238046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.246750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.246769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.246777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.256672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.256692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:110 nsid:1 lba:7886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.256699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.267184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.267211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.267219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.275373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.275392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.275400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.287834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.287854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.287862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.295795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.295814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.295822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.308004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.308024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.308032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.319700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.683 [2024-12-16 22:41:53.319721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.683 [2024-12-16 22:41:53.319729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.683 [2024-12-16 22:41:53.327567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.327587] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.327595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.684 [2024-12-16 22:41:53.338314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.338340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.338348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.684 [2024-12-16 22:41:53.349402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.349423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.349431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.684 [2024-12-16 22:41:53.361779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.361799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.361807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.684 [2024-12-16 22:41:53.373263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.373283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.373290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.684 [2024-12-16 22:41:53.384686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.684 [2024-12-16 22:41:53.384706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.684 [2024-12-16 22:41:53.384714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.393631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.393651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.393658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.404933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.404953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.404961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.415343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.415363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.415371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.423291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.423311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.423319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.433363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.433383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.433392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.442418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.442438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.442446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.452588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.452609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.452617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.462366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.462396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.470756] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.470775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.470783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.479725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.479745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.479754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.488784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.488805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.488813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.498954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.498974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.498981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.507793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.507814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.507825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.516691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.516711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.516719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.526003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.526022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.526031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:03.943 [2024-12-16 22:41:53.535937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.943 [2024-12-16 22:41:53.535958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.943 [2024-12-16 22:41:53.535966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.544226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.544246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.544255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.555999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.556019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.556027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.566293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.566312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.566320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.577055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.577075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.577083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.585000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.585020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.585028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.596502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.596526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.596534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.606009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.606029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.606037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.614013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.614035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.614043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.625694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.625715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.625723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:03.944 [2024-12-16 22:41:53.635826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:03.944 [2024-12-16 22:41:53.635846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.944 [2024-12-16 22:41:53.635853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.646920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.646942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.646951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.656263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.656284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.656292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.665817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.665838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.665846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.674131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.674151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.674159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.684021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.684041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.684049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.693067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.693087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.693094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.704095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.704114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.704122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.712363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.712384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.712391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.724104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.724123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.203 [2024-12-16 22:41:53.724131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:04.203 [2024-12-16 22:41:53.733403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0) 00:36:04.203 [2024-12-16 22:41:53.733422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
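Every failed I/O above leaves the same two-record signature: nvme_tcp.c:1365 flags the bad data digest on the qpair, then nvme_qpair.c prints the READ completing with TRANSIENT TRANSPORT ERROR (00/22). A hypothetical offline tally (the log file name is assumed; this is not part of the harness):

    # Count digest failures and the resulting failed completions.
    grep -c 'data digest error on tqpair' bdevperf.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log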
00:36:04.203 [2024-12-16 22:41:53.733430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:04.203 [2024-12-16 22:41:53.742122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0)
00:36:04.203 [2024-12-16 22:41:53.742142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:04.203 [2024-12-16 22:41:53.742150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- "data digest error on tqpair=(0x212b8c0)", the failing READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, i.e. SCT 0x0 / SC 0x22 -- repeats for well over a hundred further READs between 22:41:53.75 and 22:41:54.92; only the timestamp, cid and lba vary. bdevperf's periodic progress sample from mid-run is kept below ...]
00:36:04.537 25682.00 IOPS, 100.32 MiB/s [2024-12-16T21:41:54.238Z]
00:36:05.352 [2024-12-16 22:41:54.907742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:05.352 [2024-12-16 22:41:54.920276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x212b8c0)
00:36:05.352 [2024-12-16 22:41:54.920297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:05.352 [2024-12-16 22:41:54.920304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:05.352 25532.50 IOPS, 99.74 MiB/s
00:36:05.352 Latency(us)
00:36:05.352 [2024-12-16T21:41:55.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:05.352 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:05.352 nvme0n1 : 2.00 25555.58 99.83 0.00 0.00 5003.28 2637.04 16477.62
00:36:05.352 [2024-12-16T21:41:55.053Z] ===================================================================================================================
00:36:05.352 [2024-12-16T21:41:55.053Z] Total : 25555.58 99.83 0.00 0.00 5003.28 2637.04 16477.62
00:36:05.352 {
00:36:05.352 "results": [
00:36:05.352 {
00:36:05.352 "job": "nvme0n1",
00:36:05.352 "core_mask": "0x2",
00:36:05.352 "workload": "randread",
00:36:05.352 "status": "finished",
00:36:05.352 "queue_depth": 128,
00:36:05.352 "io_size": 4096,
00:36:05.352 "runtime": 2.00465,
00:36:05.352 "iops": 25555.58326889981,
00:36:05.352 "mibps": 99.82649714413988,
00:36:05.352 "io_failed": 0,
00:36:05.352 "io_timeout": 0,
00:36:05.352 "avg_latency_us": 5003.281113781918,
00:36:05.352 "min_latency_us": 2637.0438095238096,
00:36:05.352 "max_latency_us": 16477.62285714286
00:36:05.353 }
00:36:05.353 ],
00:36:05.353 "core_count": 1
00:36:05.353 }
00:36:05.353 22:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:05.353 22:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:05.353 22:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:05.353 | .driver_specific
00:36:05.353 | .nvme_error
00:36:05.353 | .status_code
00:36:05.353 | .command_transient_transport_error'
00:36:05.353 22:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
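The xtrace above shows how the test turns the flood of digest errors into a pass/fail condition: get_transient_errcount reads bdevperf's per-bdev I/O statistics over the RPC socket and extracts the transient-transport-error counter that --nvme-error-stat maintains, then asserts it is non-zero (here it was 200). A minimal stand-alone sketch of the same check, using the socket path and bdev name from this run (variable names are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the
    # bdevperf instance on /var/tmp/bperf.sock, as digest.sh@27-28 does above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test only requires that error injection produced at least one error.
    (( errcount > 0 )) || { echo "no transient transport errors seen" >&2; exit 1; }
    echo "transient transport errors: $errcount"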
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 529637
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529637 ']'
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529637
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529637
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529637'
00:36:05.619 killing process with pid 529637
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529637
00:36:05.619 Received shutdown signal, test time was about 2.000000 seconds
00:36:05.619
00:36:05.619 Latency(us)
00:36:05.619 [2024-12-16T21:41:55.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:05.619 [2024-12-16T21:41:55.320Z] ===================================================================================================================
00:36:05.619 [2024-12-16T21:41:55.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:05.619 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529637
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530213
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530213 /var/tmp/bperf.sock
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530213 ']'
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:05.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
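Note on the launch just traced: run_bperf_err stands up a fresh bdevperf instance per workload, and the flags map directly onto the (rw, bs, qd) triple it was called with, here (randread, 131072, 16). A condensed sketch of the pattern, with $SPDK_DIR standing in for the workspace path used by this job:

  # Sketch only: the bdevperf launch traced above, condensed.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # -z keeps bdevperf idle after it opens the RPC socket, so bdevs can be
  # attached and error injection armed before perform_tests starts the workload.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock    # autotest_common.sh helper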
00:36:05.880 [2024-12-16 22:41:55.398671] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:05.880 [2024-12-16 22:41:55.398720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530213 ]
00:36:05.880 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:05.880 Zero copy mechanism will not be used.
00:36:05.880 [2024-12-16 22:41:55.471140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:05.880 [2024-12-16 22:41:55.493503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:05.880 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:06.142 22:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:06.406 nvme0n1
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:06.406 22:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:06.674 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:06.674 Zero copy mechanism will not be used.
00:36:06.674 Running I/O for 2 seconds...
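Note before the error flood that follows: it is the intended failure mode, not a malfunction. With the NVMe/TCP data digest enabled (--ddgst) and crc32c results being corrupted on purpose via the accel error injector, reads fail digest verification and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which SPDK prints as (status code type/status code) in hex. A condensed sketch of the injection sequence just traced, where rpc is an assumed wrapper around the rpc.py invocation shown above:

  rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }  # assumed wrapper
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry indefinitely
  rpc accel_error_inject_error -o crc32c -t disable                  # clear any stale injection
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                 # --ddgst enables the data digest
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32            # arm crc32c corruption (see -i in the trace)
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # Each corrupted crc32c makes a READ fail digest verification; the driver logs
  # "data digest error" and completes the command with (00/22), as seen below.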
00:36:06.674 [2024-12-16 22:41:56.191228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.191264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.191275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.196685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.196712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.196720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.201671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.201696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.201705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.206619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.206641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.206650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.211479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.211502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.211511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.216386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.216408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.216416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.221685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.221707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.221715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.226769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.226791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.226799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.232004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.232029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.232037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.237152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.237173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.237181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.242250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.242273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.242281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.247313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.247334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.247343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.252384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.252406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.252417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.257445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.257466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.257474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.262481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.262502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.267587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.267608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.267616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.272633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.272654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.272663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.277746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.277768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.277776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.281141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.281162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.281170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.286121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.286142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.286150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.291789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.291811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.291818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.298657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.298678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.298687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.305975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.305997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.306006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.312727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.312749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.312757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.320232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.320255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.320263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.328187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.674 [2024-12-16 22:41:56.328216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.674 [2024-12-16 22:41:56.328224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.674 [2024-12-16 22:41:56.335683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.335705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.675 [2024-12-16 22:41:56.335714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.675 [2024-12-16 22:41:56.342969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.342991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:06.675 [2024-12-16 22:41:56.342999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.675 [2024-12-16 22:41:56.350212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.350234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.675 [2024-12-16 22:41:56.350242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.675 [2024-12-16 22:41:56.357980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.358002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.675 [2024-12-16 22:41:56.358014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.675 [2024-12-16 22:41:56.365697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.365719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.675 [2024-12-16 22:41:56.365727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.675 [2024-12-16 22:41:56.373006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.675 [2024-12-16 22:41:56.373028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.675 [2024-12-16 22:41:56.373037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.380497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.380522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.380531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.388113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.388136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.388144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.395920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.395942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.395951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.403434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.403456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.403465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.411096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.411119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.411129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.418770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.418793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.418802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.943 [2024-12-16 22:41:56.426158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.943 [2024-12-16 22:41:56.426188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.943 [2024-12-16 22:41:56.426202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.431944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.431966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.431975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.437207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.437228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.437236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.443201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.443222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.443231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.449568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.449589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.449598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.456770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.456793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.456801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.462552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.462573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.462582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.468096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.468117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.468125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.473965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.473987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.473995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.479528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.479549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.479557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.485189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.485215] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.485223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.490559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.490580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.490588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.495888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.495908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.495916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.501251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.501270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.501278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.506622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.506644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.506652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.511852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.511873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.517997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.518019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.518026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.524292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 
00:36:06.944 [2024-12-16 22:41:56.524313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.524323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.529640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.529661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.529669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.534910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.534936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.534944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.540359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.540380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.540388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.545759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.545780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.545788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.551062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.551083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.551091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.556324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.556344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.556352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.561609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.561630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.561637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.567838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.567859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.567866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.573426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.573451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.573458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.578750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.944 [2024-12-16 22:41:56.578771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.944 [2024-12-16 22:41:56.578779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.944 [2024-12-16 22:41:56.583907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.583928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.583935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.589091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.589112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.589120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.594145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.594165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.594173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.599384] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.599405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.599412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.604784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.604805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.604813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.610126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.610146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.610153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.615538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.615559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.615567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.620725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.620746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.620753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.626053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.626074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.626082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:06.945 [2024-12-16 22:41:56.631984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.632006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.632014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:06.945 [2024-12-16 22:41:56.637228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:06.945 [2024-12-16 22:41:56.637249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.945 [2024-12-16 22:41:56.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.642870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.642891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.642900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.648517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.648538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.648546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.654035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.654057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.654066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.656967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.656987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.656995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.662944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.662964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.662975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.667651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.667671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.667679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.673070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.673089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.673096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.678448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.678468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.678476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.683770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.683790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.683798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.689097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.689116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.689124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.694337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.694356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.694364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.699825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.699845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.699854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.216 [2024-12-16 22:41:56.705112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.216 [2024-12-16 22:41:56.705131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.216 [2024-12-16 22:41:56.705139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0

[Repeating log pattern, one iteration every 5-6 ms starting 2024-12-16 22:41:56.710: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x2518c50); nvme_qpair.c: 243:nvme_io_qpair_print_command then prints the affected command (always READ sqid:1 nsid:1 len:32, SGL TRANSPORT DATA BLOCK, with varying cid and lba); nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion, always COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 with p:0 m:0 dnr:0 and sqhd cycling through 0002/0022/0042/0062. One representative iteration:]

00:36:07.216 [2024-12-16 22:41:56.710489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50)
00:36:07.217 [2024-12-16 22:41:56.710512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:07.217 [2024-12-16 22:41:56.710520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
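[For context on what the failing check is: the NVMe/TCP transport optionally carries a data digest (DDGST) on each data-bearing PDU, a CRC32C over the payload, and nvme_tcp_accel_seq_recv_compute_crc32_done is evidently the receive-path completion that flags a mismatch between the recomputed value and the one carried in the PDU. A minimal, self-contained reference sketch of that checksum (a bitwise CRC32C for illustration only, not SPDK's accel-framework-offloaded implementation) is:]

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* CRC32C (Castagnoli), reflected polynomial 0x82F63B78 -- the checksum
 * NVMe/TCP uses for its header (HDGST) and data (DDGST) digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* A receiver recomputes the digest over the received payload and
     * compares it with the DDGST field of the PDU; any mismatch is
     * surfaced as a data digest error like the ones in this log. */
    const char payload[] = "123456789";
    uint32_t ddgst = crc32c((const uint8_t *)payload, strlen(payload));

    /* Known-answer check: CRC32C("123456789") == 0xE3069283. */
    printf("DDGST = 0x%08x (%s)\n", ddgst,
           ddgst == 0xE3069283u ? "ok" : "mismatch");
    return 0;
}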
[The same triplet repeats through 22:41:57.188, now spread across multiple outstanding queue entries (cid 1, 2, 4, 5, 6, 7, 9, 10, 11, 12, 13). Every completion carries the status pair (00/22): status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with dnr:0, i.e. the do-not-retry bit is clear, so the host is permitted to resubmit each failed READ.]
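[The "(00/22) ... p:0 m:0 dnr:0" text printed by spdk_nvme_print_completion is a rendering of the 16-bit phase+status halfword of the completion queue entry (CQE DW3 bits 31:16); SPDK keeps the same bits in its spdk_nvme_status bitfield. A hypothetical stand-alone decoder, following the field layout in the NVMe base specification:]

#include <stdint.h>
#include <stdio.h>

/* Layout of the CQE phase+status halfword per the NVMe base spec:
 *   bit 0      P    phase tag
 *   bits 8:1   SC   status code
 *   bits 11:9  SCT  status code type
 *   bits 13:12 CRD  command retry delay
 *   bit 14     M    more
 *   bit 15     DNR  do not retry
 */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1u;
    unsigned sc  = (status >> 1) & 0xFFu;
    unsigned sct = (status >> 9) & 0x7u;
    unsigned m   = (status >> 14) & 0x1u;
    unsigned dnr = (status >> 15) & 0x1u;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 (generic command status), SC 0x22 (Transient Transport
     * Error), every other bit clear -- the completion seen above. */
    decode_status(0x22u << 1);  /* prints "(00/22) p:0 m:0 dnr:0" */
    return 0;
}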
00:36:07.769 5626.00 IOPS, 703.25 MiB/s [2024-12-16T21:41:57.470Z]

[Periodic throughput sample emitted between the error entries: 703.25 MiB/s at 5626.00 IOPS works out to exactly 128 KiB per I/O, consistent with the len:32 reads above assuming a 4 KiB block size. The digest-error pattern then resumes on tqpair=(0x2518c50), mostly on cid 4 and cid 15, and runs through 22:41:57.431, ending with:]

00:36:07.770 [2024-12-16 22:41:57.431724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50)
00:36:07.770 [2024-12-16 22:41:57.431742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:07.771 [2024-12-16 22:41:57.431750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.436861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.436881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.436888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.441998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.442017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.442025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.447159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.447179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.447186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.452278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.452297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.452305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.457478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.457497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.457506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:07.771 [2024-12-16 22:41:57.462664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:07.771 [2024-12-16 22:41:57.462684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.771 [2024-12-16 22:41:57.462693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.467792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.467812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.467820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.472982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.473003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.473011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.478135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.478155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.478167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.482992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.483014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.483022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.488262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.488282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.493462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.493483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.493491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.498668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.498689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.498697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.503886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.503906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.503914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.509077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.509097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.509105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.514236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.514256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.514264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.519353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.519373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.519381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.524465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.524485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.524503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.529266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.529286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.529294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.534368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.534389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.534397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.539531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 
00:36:08.049 [2024-12-16 22:41:57.539551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.539559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.544754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.544774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.544782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.549930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.549950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.549958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.555129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.555149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.555156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.560319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.560346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.560355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.565457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.565478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.565489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.570566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.570586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.570593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.575714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.575734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.575742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.580879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.580899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.580906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.586110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.586130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.586138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.591309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.591330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.049 [2024-12-16 22:41:57.591338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.049 [2024-12-16 22:41:57.596482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.049 [2024-12-16 22:41:57.596503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.596511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.601561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.601581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.601590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.606747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.606767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.606775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.611899] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.611924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.611932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.617097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.617116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.622227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.622247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.622256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.627523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.627544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.627552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.632672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.632693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.632701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.637871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.637893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.637901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.643039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.643060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.643068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:08.050 [2024-12-16 22:41:57.648382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.648403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.648412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.653640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.653661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.653669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.658841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.658862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.658870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.664075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.664095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.664103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.669233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.669253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.669261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.674444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.674464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.674472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.679552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.679572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.684671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.684692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.684700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.690619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.690640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.690647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.696475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.696496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.696504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.703409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.703430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.703441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.711400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.050 [2024-12-16 22:41:57.711422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.050 [2024-12-16 22:41:57.711430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.050 [2024-12-16 22:41:57.718841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.051 [2024-12-16 22:41:57.718863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.051 [2024-12-16 22:41:57.718870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.051 [2024-12-16 22:41:57.726119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.051 [2024-12-16 22:41:57.726140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.051 [2024-12-16 22:41:57.726148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.051 [2024-12-16 22:41:57.733553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.051 [2024-12-16 22:41:57.733574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.051 [2024-12-16 22:41:57.733583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.051 [2024-12-16 22:41:57.741372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.051 [2024-12-16 22:41:57.741394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.051 [2024-12-16 22:41:57.741402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.051 [2024-12-16 22:41:57.748515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.051 [2024-12-16 22:41:57.748538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.051 [2024-12-16 22:41:57.748546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.325 [2024-12-16 22:41:57.756982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.325 [2024-12-16 22:41:57.757005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.325 [2024-12-16 22:41:57.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.325 [2024-12-16 22:41:57.764994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.325 [2024-12-16 22:41:57.765018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.325 [2024-12-16 22:41:57.765027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.325 [2024-12-16 22:41:57.772563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.325 [2024-12-16 22:41:57.772585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.325 [2024-12-16 22:41:57.772593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.325 [2024-12-16 22:41:57.780823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.325 [2024-12-16 22:41:57.780845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:08.325 [2024-12-16 22:41:57.780854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.325 [2024-12-16 22:41:57.789026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.325 [2024-12-16 22:41:57.789048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.789056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.797304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.797326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.797334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.805351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.805374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.805382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.812391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.812414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.812423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.820450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.820472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.820481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.828187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.828215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.828225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.833597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.833619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.833633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.839113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.839135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.839143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.844433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.844454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.844462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.849688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.849709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.855063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.855085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.860542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.860564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.860572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.865788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.865817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.871011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.871031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.876297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.876317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.876325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.881546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.881572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.881579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.886755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.886776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.886784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.891986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.892007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.892015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.897204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.897225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.897232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.902395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.902417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.902425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.907599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 
00:36:08.326 [2024-12-16 22:41:57.907620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.907628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.912886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.912908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.912916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.918132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.918153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.918161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.923341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.923361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.923369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.928426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.928447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.928456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.933656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.933677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.933685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.938905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.938926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.326 [2024-12-16 22:41:57.938934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.326 [2024-12-16 22:41:57.944096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.326 [2024-12-16 22:41:57.944116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.944125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.949295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.949317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.949325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.954537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.954558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.954566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.959756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.959787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.965120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.965142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.965150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.970469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.970491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.970502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.975752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.975773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.975782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.981395] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.981417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.981425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.986673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.986695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.986703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.991871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.991893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.991901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:57.997137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:57.997158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:57.997166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:58.003249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:58.003270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:58.003279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:58.009579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:58.009601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:58.009610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.327 [2024-12-16 22:41:58.014856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:58.014877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:58.014886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:08.327 [2024-12-16 22:41:58.020088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.327 [2024-12-16 22:41:58.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.327 [2024-12-16 22:41:58.020125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.025365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.025386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.025395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.030641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.030662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.030671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.035874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.035894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.035902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.041071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.041092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.041100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.046213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.046233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.046240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.051402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.051422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.051430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.056593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.056613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.056621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.061693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.061713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.061721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.066829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.066849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.066856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.071938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.071959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.071967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.077049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.077068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.077077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.082200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.082221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.082228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.087326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.087347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.087355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.092452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.092472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.092480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.097565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.097585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.097592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.102726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.102747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.102754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.107887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.107911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.107919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.113065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.113086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.113094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.118226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.118246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.118254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.123352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.123372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 
[2024-12-16 22:41:58.123380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.128477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.128496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.128504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.133541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.133561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.133569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.138654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.138675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.138682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.143746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.143766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.143773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.148296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.612 [2024-12-16 22:41:58.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.612 [2024-12-16 22:41:58.151475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.612 [2024-12-16 22:41:58.151494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.151502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.156606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.156625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.156632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.161507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.161526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.161534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.166386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.166406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.166414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.171187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.171213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.171221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.176051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.176070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.180967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.180986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.180994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.185841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.185860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.613 [2024-12-16 22:41:58.185868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:08.613 [2024-12-16 22:41:58.190772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50) 00:36:08.613 [2024-12-16 22:41:58.190791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:08.613 [2024-12-16 22:41:58.190802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:08.613 5665.50 IOPS, 708.19 MiB/s [2024-12-16T21:41:58.314Z]
[2024-12-16 22:41:58.196252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2518c50)
00:36:08.613 [2024-12-16 22:41:58.196272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:08.613 [2024-12-16 22:41:58.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:08.613
00:36:08.613 Latency(us)
00:36:08.613 [2024-12-16T21:41:58.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:08.613 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:08.613 nvme0n1 : 2.00 5663.89 707.99 0.00 0.00 2821.69 659.26 8488.47
00:36:08.613 [2024-12-16T21:41:58.314Z] ===================================================================================================================
00:36:08.613 [2024-12-16T21:41:58.314Z] Total : 5663.89 707.99 0.00 0.00 2821.69 659.26 8488.47
00:36:08.613 {
00:36:08.613   "results": [
00:36:08.613     {
00:36:08.613       "job": "nvme0n1",
00:36:08.613       "core_mask": "0x2",
00:36:08.613       "workload": "randread",
00:36:08.613       "status": "finished",
00:36:08.613       "queue_depth": 16,
00:36:08.613       "io_size": 131072,
00:36:08.613       "runtime": 2.003395,
00:36:08.613       "iops": 5663.885554271624,
00:36:08.613       "mibps": 707.985694283953,
00:36:08.613       "io_failed": 0,
00:36:08.613       "io_timeout": 0,
00:36:08.613       "avg_latency_us": 2821.690493228754,
00:36:08.613       "min_latency_us": 659.2609523809524,
00:36:08.613       "max_latency_us": 8488.47238095238
00:36:08.613     }
00:36:08.613   ],
00:36:08.613   "core_count": 1
00:36:08.613 }
00:36:08.613 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:08.613 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:08.613 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:08.613 | .driver_specific
00:36:08.613 | .nvme_error
00:36:08.613 | .status_code
00:36:08.613 | .command_transient_transport_error'
00:36:08.613 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 367 > 0 ))
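[Editor's note] The trace above is host/digest.sh's pass/fail check for this randread pass: get_transient_errcount fetches bdevperf's iostat over the test's private RPC socket, and jq pulls the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR completions, which must be non-zero (here 367) for the injected digest errors to count as detected. A condensed sketch of that pipeline, built only from the commands logged above (variable names here are illustrative, not the script's own):

    # Sketch, not a verbatim copy of host/digest.sh: count transient transport errors on nvme0n1.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # the stage passes only if at least one such error was recorded

The multi-line jq filter in the trace ('.bdevs[0] | .driver_specific | ...') is equivalent to the dotted path above; per-status counters like command_transient_transport_error are populated because bdev_nvme_set_options is given --nvme-error-stat, as in the setup traced further down.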
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530213
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530213 ']'
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530213
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530213
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530213'
00:36:08.893 killing process with pid 530213
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530213
00:36:08.893 Received shutdown signal, test time was about 2.000000 seconds
00:36:08.893
00:36:08.893 Latency(us)
00:36:08.893 [2024-12-16T21:41:58.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:08.893 [2024-12-16T21:41:58.594Z] ===================================================================================================================
00:36:08.893 [2024-12-16T21:41:58.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:08.893 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530213
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530685
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530685 /var/tmp/bperf.sock
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530685 ']'
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:09.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
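[Editor's note] The relaunch just traced is the harness's standard bdevperf pattern for these error passes: start bdevperf with -z so it comes up idle and only listens on its private RPC socket, record its pid, and poll the socket before sending any configuration. A minimal sketch of that pattern (the readiness loop is an illustrative stand-in for the autotest waitforlisten helper; rpc_get_methods is used here only as a cheap RPC to probe with):

    # Sketch of the -z launch pattern logged above, not the harness code itself.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # stand-in for waitforlisten's retry loop (max_retries=100 in the trace)
    done

With -z, no job runs until perform_tests arrives over the socket, which is what lets the script attach the controller and arm the error injector first; the application's startup banner follows.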
00:36:09.162 [2024-12-16 22:41:58.666032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:09.162 [2024-12-16 22:41:58.666080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530685 ]
00:36:09.162 [2024-12-16 22:41:58.739947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:09.162 [2024-12-16 22:41:58.762555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:09.162 22:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:09.441 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:09.718 nvme0n1
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:09.718 22:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:09.978 Running I/O for 2 seconds...
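[Editor's note] The trace above arms the failure for the 2-second write run whose output follows. A condensed sketch of the sequence, using the commands exactly as logged (bperf_rpc is digest.sh's wrapper around rpc.py -s /var/tmp/bperf.sock, per the @18 lines; rpc_cmd is the harness wrapper for its default application socket, which is not shown in this excerpt):

    # Condensed from the digest.sh trace above; a sketch, not the script verbatim.
    bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # per-status error counters on, retry count -1 as logged
    rpc_cmd accel_error_inject_error -o crc32c -t disable                     # injection off while attaching
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # --ddgst enables TCP data digests
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # corrupt crc32c results at the logged interval
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

With data digests enabled end to end, each corrupted crc32c surfaces below as a data_crc32_calc_done digest error, and the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), feeding the same transient-error counter that is checked after the run.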
00:36:09.978 [2024-12-16 22:41:59.447804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee2c28 00:36:09.978 [2024-12-16 22:41:59.448465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.448491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.456808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee6fa8 00:36:09.978 [2024-12-16 22:41:59.457476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.457496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.465931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee5658 00:36:09.978 [2024-12-16 22:41:59.466544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.466564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.474743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efeb58 00:36:09.978 [2024-12-16 22:41:59.475372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.475391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.484785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efb480 00:36:09.978 [2024-12-16 22:41:59.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.485931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.493082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee4de8 00:36:09.978 [2024-12-16 22:41:59.493884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.493903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.501935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48 00:36:09.978 [2024-12-16 22:41:59.502711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.502732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.511062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef2948 00:36:09.978 [2024-12-16 22:41:59.511649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.511668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.520137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eeea00 00:36:09.978 [2024-12-16 22:41:59.521012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.521030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.529297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efdeb0 00:36:09.978 [2024-12-16 22:41:59.529992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.530011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.539455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef8a50 00:36:09.978 [2024-12-16 22:41:59.540973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.978 [2024-12-16 22:41:59.540992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:09.978 [2024-12-16 22:41:59.545779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef0bc0 00:36:09.979 [2024-12-16 22:41:59.546430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.546448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.555190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eec840 00:36:09.979 [2024-12-16 22:41:59.555957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.555975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.564497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef96f8 00:36:09.979 [2024-12-16 22:41:59.565378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.565396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.573822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eed0b0 00:36:09.979 [2024-12-16 22:41:59.574878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.574897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.582092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eed4e8 00:36:09.979 [2024-12-16 22:41:59.583011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.583030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.591291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef5378 00:36:09.979 [2024-12-16 22:41:59.592155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.592173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.600169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef3a28 00:36:09.979 [2024-12-16 22:41:59.601059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.601076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.609042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee6b70 00:36:09.979 [2024-12-16 22:41:59.609924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.609943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.618017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee27f0 00:36:09.979 [2024-12-16 22:41:59.618926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.618946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.627129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eddc00 00:36:09.979 [2024-12-16 22:41:59.628040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.628060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.636131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee4578 00:36:09.979 [2024-12-16 22:41:59.636999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.637017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.645435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efe720 00:36:09.979 [2024-12-16 22:41:59.646426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.646445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.654919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eeee38 00:36:09.979 [2024-12-16 22:41:59.656169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.656188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.663199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee84c0 00:36:09.979 [2024-12-16 22:41:59.664073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.664091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:09.979 [2024-12-16 22:41:59.672184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1710 00:36:09.979 [2024-12-16 22:41:59.673005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.979 [2024-12-16 22:41:59.673024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.682586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee0630 00:36:10.239 [2024-12-16 22:41:59.684111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.684129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.689047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee3498 00:36:10.239 [2024-12-16 22:41:59.689723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.689741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.698052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee38d0 00:36:10.239 [2024-12-16 22:41:59.698864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.698883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.708146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eff3c8 00:36:10.239 [2024-12-16 22:41:59.709038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.709056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.717017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016edfdc0 00:36:10.239 [2024-12-16 22:41:59.717956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.717974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.725907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eebb98 00:36:10.239 [2024-12-16 22:41:59.726824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.239 [2024-12-16 22:41:59.726842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.239 [2024-12-16 22:41:59.734816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efb8b8 00:36:10.240 [2024-12-16 22:41:59.735738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.735762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.743700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef96f8 00:36:10.240 [2024-12-16 22:41:59.744647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.744666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.752872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efa7d8 00:36:10.240 [2024-12-16 22:41:59.753783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.753802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.761755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eeb328 00:36:10.240 [2024-12-16 22:41:59.762666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.762684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.770659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eea248 00:36:10.240 [2024-12-16 22:41:59.771598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.771616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.779702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee88f8 00:36:10.240 [2024-12-16 22:41:59.780624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.780642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.788608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eeff18 00:36:10.240 [2024-12-16 22:41:59.789464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.789482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.797506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef8a50 00:36:10.240 [2024-12-16 22:41:59.798325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.798343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.806375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee73e0 00:36:10.240 [2024-12-16 22:41:59.807187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.807210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.814660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eec408 00:36:10.240 [2024-12-16 22:41:59.815561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 
22:41:59.815580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.823659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eec840 00:36:10.240 [2024-12-16 22:41:59.824577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.824595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.832886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef6458 00:36:10.240 [2024-12-16 22:41:59.833792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.833810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.843073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eee190 00:36:10.240 [2024-12-16 22:41:59.844448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.844465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.852399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee8088 00:36:10.240 [2024-12-16 22:41:59.853913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.853929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.859877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee0a68 00:36:10.240 [2024-12-16 22:41:59.860746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.860765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.868784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efb048 00:36:10.240 [2024-12-16 22:41:59.869828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.869846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.878080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efc560 00:36:10.240 [2024-12-16 22:41:59.879226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:10.240 [2024-12-16 22:41:59.879243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.887352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee5a90 00:36:10.240 [2024-12-16 22:41:59.888615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.888632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.895537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef9b30 00:36:10.240 [2024-12-16 22:41:59.896757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.896776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.904518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efb480 00:36:10.240 [2024-12-16 22:41:59.905466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.905484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.913529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016efac10 00:36:10.240 [2024-12-16 22:41:59.914383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.914401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.921790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee6738 00:36:10.240 [2024-12-16 22:41:59.922712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.922729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.930803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef1430 00:36:10.240 [2024-12-16 22:41:59.931761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:10.240 [2024-12-16 22:41:59.931779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:10.240 [2024-12-16 22:41:59.940567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eeb760 00:36:10.501 [2024-12-16 22:41:59.941779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23011 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:10.501 [2024-12-16 22:41:59.941798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:10.501
[2024-12-16 22:41:59.949776 – 22:42:01.253498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) — this record pair repeats for every injected data-digest error (pdu address varies per record): nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected WRITE (sqid:1, nsid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000; cid and lba vary), and nvme_qpair.c: 474:spdk_nvme_print_completion reports COMMAND TRANSIENT TRANSPORT ERROR (00/22) for the same qid:1/cid with p:0 m:0 dnr:0. Interim throughput sample: 28074.00 IOPS, 109.66 MiB/s [2024-12-16T21:42:00.464Z].
[2024-12-16 22:42:01.261675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef6020 00:36:11.808 [2024-12-16 22:42:01.262302] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.262320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.270598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee9168 00:36:11.808 [2024-12-16 22:42:01.271222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.271240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.278767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eff3c8 00:36:11.808 [2024-12-16 22:42:01.279417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.279436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.288272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eff3c8 00:36:11.808 [2024-12-16 22:42:01.288934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.288952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.298275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eff3c8 00:36:11.808 [2024-12-16 22:42:01.299360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.299378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.307636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee8d30 00:36:11.808 [2024-12-16 22:42:01.308845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.308863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.315782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef3a28 00:36:11.808 [2024-12-16 22:42:01.316870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.316888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.324624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016eef270 00:36:11.808 [2024-12-16 
22:42:01.325585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.325604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.332632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee4578 00:36:11.808 [2024-12-16 22:42:01.333464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.333482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.341762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee8088 00:36:11.808 [2024-12-16 22:42:01.342621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.342640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.350308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef3e60 00:36:11.808 [2024-12-16 22:42:01.351071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.351090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.360508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ef0bc0 00:36:11.808 [2024-12-16 22:42:01.361292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.361310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.369576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48 00:36:11.808 [2024-12-16 22:42:01.370675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.378439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48 00:36:11.808 [2024-12-16 22:42:01.379531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.808 [2024-12-16 22:42:01.379550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.808 [2024-12-16 22:42:01.387281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48 
00:36:11.808 [2024-12-16 22:42:01.388380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.808 [2024-12-16 22:42:01.388398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.808 [2024-12-16 22:42:01.396084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48
00:36:11.808 [2024-12-16 22:42:01.397196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.808 [2024-12-16 22:42:01.397214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.808 [2024-12-16 22:42:01.404939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48
00:36:11.808 [2024-12-16 22:42:01.406033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.809 [2024-12-16 22:42:01.406052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.809 [2024-12-16 22:42:01.413775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee1b48
00:36:11.809 [2024-12-16 22:42:01.414767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.809 [2024-12-16 22:42:01.414786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:36:11.809 [2024-12-16 22:42:01.422016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee88f8
00:36:11.809 [2024-12-16 22:42:01.423098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.809 [2024-12-16 22:42:01.423116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:11.809 [2024-12-16 22:42:01.431398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b50e0) with pdu=0x200016ee0ea0
00:36:11.809 [2024-12-16 22:42:01.432600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:11.809 [2024-12-16 22:42:01.432618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:36:11.809 28247.00 IOPS, 110.34 MiB/s
00:36:11.809 Latency(us)
00:36:11.809 [2024-12-16T21:42:01.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:11.809 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:11.809 nvme0n1 : 2.00 28242.24 110.32 0.00 0.00 4526.48 1778.83 14667.58
00:36:11.809 [2024-12-16T21:42:01.510Z] ===================================================================================================================
00:36:11.809 [2024-12-16T21:42:01.510Z] Total : 28242.24 110.32 0.00 0.00 4526.48 1778.83 14667.58
00:36:11.809 {
00:36:11.809 "results": [
00:36:11.809 {
00:36:11.809 "job": "nvme0n1",
00:36:11.809 "core_mask": "0x2",
00:36:11.809 "workload": "randwrite",
00:36:11.809 "status": "finished",
00:36:11.809 "queue_depth": 128,
00:36:11.809 "io_size": 4096,
00:36:11.809 "runtime": 2.003205,
00:36:11.809 "iops": 28242.241807503477,
00:36:11.809 "mibps": 110.32125706056046,
00:36:11.809 "io_failed": 0,
00:36:11.809 "io_timeout": 0,
00:36:11.809 "avg_latency_us": 4526.484764749701,
00:36:11.809 "min_latency_us": 1778.8342857142857,
00:36:11.809 "max_latency_us": 14667.580952380953
00:36:11.809 }
00:36:11.809 ],
00:36:11.809 "core_count": 1
00:36:11.809 }
00:36:11.809 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:11.809 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:11.809 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:11.809 | .driver_specific
00:36:11.809 | .nvme_error
00:36:11.809 | .status_code
00:36:11.809 | .command_transient_transport_error'
00:36:11.809 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530685
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530685 ']'
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530685
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530685
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530685'
00:36:12.069 killing process with pid 530685
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530685
00:36:12.069 Received shutdown signal, test time was about 2.000000 seconds
00:36:12.069
00:36:12.069 Latency(us)
00:36:12.069 [2024-12-16T21:42:01.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:12.069 [2024-12-16T21:42:01.770Z] ===================================================================================================================
00:36:12.069 [2024-12-16T21:42:01.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:12.069 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530685
00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:12.328 22:42:01
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531273 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531273 /var/tmp/bperf.sock 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531273 ']' 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.328 22:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:12.328 [2024-12-16 22:42:01.907428] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:12.328 [2024-12-16 22:42:01.907474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531273 ] 00:36:12.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:12.328 Zero copy mechanism will not be used. 
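The trace above is the shape of each run in this test: host/digest.sh launches a dedicated bdevperf with its own RPC socket (-r /var/tmp/bperf.sock), waits for it to listen, drives the workload, and afterwards reads the per-bdev error counters back over the same socket, as seen at the end of the previous run. A minimal sketch of that counter extraction, assuming an SPDK checkout at $SPDK_DIR (a stand-in for the Jenkins workspace path) and the bdev name nvme0n1 used in this log:

  # Fetch per-bdev I/O statistics from the bdevperf instance over its RPC socket,
  # then pull out the transient-transport-error counter kept by --nvme-error-stat.
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # The digest test passes only if the injected CRC corruption surfaced as
  # TRANSIENT TRANSPORT ERROR completions; the run above counted 221 of them.
  (( errcount > 0 ))

How many corrupted payloads land inside the 2-second window is timing-dependent, which is presumably why the script asserts only that the counter is non-zero rather than an exact count.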
00:36:12.328 [2024-12-16 22:42:01.982812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:12.328 [2024-12-16 22:42:02.004397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:12.587 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:12.587 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:12.587 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:12.587 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:12.846 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:13.106 nvme0n1
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:13.106 22:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:13.106 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:13.106 Zero copy mechanism will not be used.
00:36:13.106 Running I/O for 2 seconds...
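The setup just traced is what makes this a digest-error run rather than a plain benchmark: error injection is disabled while the controller connects, the controller is attached with TCP data digest enabled (--ddgst), and only then is crc32c corruption injected into the accel layer, so digest calculations for in-flight WRITEs start failing and each affected command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the flood of records below shows. A condensed sketch of the same RPC sequence, using an rpc() helper around scripts/rpc.py ($SPDK_DIR again standing in for the workspace path):

  # Shorthand for talking to this bdevperf instance's RPC socket
  rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Keep per-type NVMe error counters and retry failed I/O indefinitely
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Connect cleanly first: no error injection while the controller attaches
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF target with host-side data digest (DDGST) enabled
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Now corrupt crc32c operations in the accel layer (-i 32, as in the trace above)
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the randwrite workload defined on the bdevperf command line
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests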
00:36:13.106 [2024-12-16 22:42:02.759507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.759583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.759611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.765323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.765391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.765416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.770035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.770110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.770131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.774750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.774865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.774883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.779463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.779580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.779599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.784468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.784609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.784630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.789878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.789970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.789988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.795230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.795384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.795403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.800678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.800827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.800846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.106 [2024-12-16 22:42:02.805950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.106 [2024-12-16 22:42:02.806015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.106 [2024-12-16 22:42:02.806034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.811273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.811332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.811351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.816822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.816899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.816917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.822330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.822471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.822489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.827360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.827422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.827440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.832519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.832603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.832621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.837889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.837942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.837960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.843273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.843369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.843387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.849604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.849660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.849678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.854705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.854805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.854823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.859930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.860008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.860027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.864989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.865044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.865062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.870336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.870405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.870423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.876002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.876106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.876123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.880897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.880966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.880984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.885579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.885686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.885704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.890317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.890402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.890420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.894760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.894835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.894852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.899441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.899550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.899574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.904110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.904168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.904186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.908738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.908808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.908826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.913358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.913418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.913436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.917746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.917803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.917821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.922450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.922504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.922522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.927425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.927594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.927611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.932189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.932269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 
22:42:02.932287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.936742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.936840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.368 [2024-12-16 22:42:02.936858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.368 [2024-12-16 22:42:02.941900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.368 [2024-12-16 22:42:02.942038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.942056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.947320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.947399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.947418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.952438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.952509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.952527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.957040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.957124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.957142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.961661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.961733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.961751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.966202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.966280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:13.369 [2024-12-16 22:42:02.966298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.970696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.970773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.970792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.975088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.975158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.975175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.979776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.979831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.979849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.984507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.984646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.984663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.989729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.989796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:02.995080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:02.995185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:02.995208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.000197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.000266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.000284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.005488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.005588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.005606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.010251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.010345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.014879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.014934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.014954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.019336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.019400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.019418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.023850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.023976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.023998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.028556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.028629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.028646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.033211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.033267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.033285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.037635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.037707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.037725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.042306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.042378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.042396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.047124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.047221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.047239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.052741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.052810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.052828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.057508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.057578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.057596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.062144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.062219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.062237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.369 [2024-12-16 22:42:03.067175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.369 [2024-12-16 22:42:03.067279] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.369 [2024-12-16 22:42:03.067297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.630 [2024-12-16 22:42:03.072556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.630 [2024-12-16 22:42:03.072611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.630 [2024-12-16 22:42:03.072630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.630 [2024-12-16 22:42:03.077658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.630 [2024-12-16 22:42:03.077735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.630 [2024-12-16 22:42:03.077753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.630 [2024-12-16 22:42:03.082827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.630 [2024-12-16 22:42:03.082953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.082971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.087784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.087913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.087930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.092987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.093066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.093084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.099439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.099592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.099610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.106592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.106738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.106756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.114213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.114333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.114351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.121891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.122057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.122075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.128941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.129092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.129110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.137041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.144311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.144432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.144450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.152139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.152274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.152292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.160165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 
22:42:03.160302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.160321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.167694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.167855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.167873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.175555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.175688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.175706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.182623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.182794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.182816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.189585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.189726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.189744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.197143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.197252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.197269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.202720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.202798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.202816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.208083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with 
pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.208234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.208252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.213319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.213471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.213489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.218554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.218635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.218653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.223385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.223555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.223573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.228421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.228589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.228607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.234652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.234808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.234826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.240459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.240560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.240578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.246586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.246742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.246760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.253237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.253413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.253431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.259350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.631 [2024-12-16 22:42:03.259485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.631 [2024-12-16 22:42:03.259503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.631 [2024-12-16 22:42:03.265828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.265998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.266015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.272169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.272356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.272374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.278360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.278562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.278580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.284765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.284938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.284956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.291228] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.291360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.291378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.297607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.297768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.297786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.303947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.304052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.304069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.309609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.309963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.309982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.315978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.316277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.316298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.321943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.322288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.322308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.632 [2024-12-16 22:42:03.329024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.632 [2024-12-16 22:42:03.329379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.632 [2024-12-16 22:42:03.329398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.336661] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.336938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.336957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.342109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.342223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.342245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.346569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.346828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.346848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.351030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.351291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.351310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.355414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.355666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.355685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.359820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.360066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.360085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.364295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.364571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.893 
[2024-12-16 22:42:03.369039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.369300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.369319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.373696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.373952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.373971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.379058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.379335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.379354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.384225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.384481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.384500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.389373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.389631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.389650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.393885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.394147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.394166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.398266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.398521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.398540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.402584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.402844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.402863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.407188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.407436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.407454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.411884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.412126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.412145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.416326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.416598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.416617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.420846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.421103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.421122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.425339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.425582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.425602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.429652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.429891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.429910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.433945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.893 [2024-12-16 22:42:03.434187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.893 [2024-12-16 22:42:03.434214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.893 [2024-12-16 22:42:03.438253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.438493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.438513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.442517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.442776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.442795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.446779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.447026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.447046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.451003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.451265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.451284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.455251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.455492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.455511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.459457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.459713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.459736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.463662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.463936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.463955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.467940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.468214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.468233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.472136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.472413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.472432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.476357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.476626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.476645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.480832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.481082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.481101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.485337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.485606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.485625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.489646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.489908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.489928] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.493850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.494115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.494134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.498048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.498316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.498335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.502262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.502523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.502542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.506500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.506759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.506778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.510767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.511033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.511052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.514975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.515244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.515263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.519231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.519482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.519502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.523439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.523705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.523725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.527678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.527932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.527952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.531891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.532142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.532161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.536141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.536399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.536418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.541378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.541632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.541651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.545751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.545995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.546014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.550101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.550356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 
22:42:03.550375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.554595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.894 [2024-12-16 22:42:03.554848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.894 [2024-12-16 22:42:03.554867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.894 [2024-12-16 22:42:03.558858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.559118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.559136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.563441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.563685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.563703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.568408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.568658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.568677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.573463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.573710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.573732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.578525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.578774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.583183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.583433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:13.895 [2024-12-16 22:42:03.583452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:13.895 [2024-12-16 22:42:03.588740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:13.895 [2024-12-16 22:42:03.588992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:13.895 [2024-12-16 22:42:03.589011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.593884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.594139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.594158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.599095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.599350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.599369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.604036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.604289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.604308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.609278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.609536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.609555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.614418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.614678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.614697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.619553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.619815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.619835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.624835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.625076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.625096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.629860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.630123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.630143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.634868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.635108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.635127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.640029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.640293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.640313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.645446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.645693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.645713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.650463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.650718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.156 [2024-12-16 22:42:03.650737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.156 [2024-12-16 22:42:03.655165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.156 [2024-12-16 22:42:03.655430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
00:36:14.157 6017.00 IOPS, 752.12 MiB/s [2024-12-16T21:42:03.858Z]
[... the Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets continue for further LBAs ...]
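The repeated tcp.c:2241:data_crc32_calc_done errors mean that the CRC32C data digest (DDGST) computed over a received NVMe/TCP PDU payload did not match the digest carried in the PDU, so each affected WRITE completes with the transient transport error above and is retried. A self-contained sketch of the digest check, illustrative only and not SPDK's implementation (CRC32C uses the Castagnoli polynomial, 0x82F63B78 in reflected form):

    def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
        # Bitwise reflected CRC32C (Castagnoli); initial value and
        # final XOR are all-ones per the standard definition.
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(payload: bytes, received_ddgst: int) -> bool:
        # A mismatch here is what the log reports as "Data digest error".
        return crc32c(payload) == received_ddgst

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value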
pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.292821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.292839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.296841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.297096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.297115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.301083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.301355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.301385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.305673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.305968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.305987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.311874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.312159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.312178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.317563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.317837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.317856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.322656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.322903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.322924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.327582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.327838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.327857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.332499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.332762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.332781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.337406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.337654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.337673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.342358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.342598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.342617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.347237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.347487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.347506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.683 [2024-12-16 22:42:04.352054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.683 [2024-12-16 22:42:04.352354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.683 [2024-12-16 22:42:04.352373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.356922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.357165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.357185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.361696] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.361957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.361976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.366707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.366949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.366971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.371526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.371776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.371795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.376422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.376677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.376696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.684 [2024-12-16 22:42:04.381232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.684 [2024-12-16 22:42:04.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.684 [2024-12-16 22:42:04.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.385943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.386206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.386225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.391109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.391384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.391403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.945 
[2024-12-16 22:42:04.395937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.396188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.396214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.400821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.401087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.401106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.405768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.406057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.406076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.410735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.410991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.411011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.416012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.416301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.416321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.421651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.421907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.421927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.426458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.426701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.431509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.431791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.431810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.436652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.436911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.436931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.441540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.441802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.441821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.446959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.447232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.447251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.452514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.452770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.457558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.457804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.457824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.463005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.463278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.463297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.469287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.469534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.469554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.474294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.474532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.474551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.479168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.479425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.479445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.484364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.945 [2024-12-16 22:42:04.484624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.945 [2024-12-16 22:42:04.484644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.945 [2024-12-16 22:42:04.490326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.490648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.490668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.495903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.496158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.496177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.500446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.500706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.500728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.504973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.505229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.505248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.509440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.509692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.509711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.513935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.514183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.514209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.518423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.518665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.518685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.523017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.523279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.523297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.527550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.527792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.527811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.531977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.532225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.532244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.536456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.536713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.536732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.540818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.541084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.541103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.545474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.545724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.545743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.550081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.550357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.550377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.555299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.555411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.555429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.560273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.560530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.560549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.565486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.565742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 
22:42:04.565761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.570319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.570576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.570595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.575292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.575546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.575565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.580230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.580476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.580496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.585366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.585603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.585622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.590842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.591101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.591121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.595994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.596261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.596279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.600970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.601235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:14.946 [2024-12-16 22:42:04.601254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.606160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.606441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.611276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.611377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.611395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.616459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.616713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.616734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.946 [2024-12-16 22:42:04.621898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.946 [2024-12-16 22:42:04.622148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.946 [2024-12-16 22:42:04.622167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:14.947 [2024-12-16 22:42:04.626738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.947 [2024-12-16 22:42:04.626985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.947 [2024-12-16 22:42:04.627008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:14.947 [2024-12-16 22:42:04.631443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.947 [2024-12-16 22:42:04.631699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.947 [2024-12-16 22:42:04.631718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:14.947 [2024-12-16 22:42:04.636260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.947 [2024-12-16 22:42:04.636509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.947 [2024-12-16 22:42:04.636527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:14.947 [2024-12-16 22:42:04.641339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:14.947 [2024-12-16 22:42:04.641593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:14.947 [2024-12-16 22:42:04.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.646486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.646731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.646750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.651958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.652215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.652234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.658569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.658882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.658901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.665403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.665659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.665678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.671545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.671897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.671917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.678358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.678672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.678692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.685104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.685462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.685481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.692083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.692420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.692440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.698851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.699183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.699208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.705239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.705559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.705578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.712865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.713104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.713123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.719340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.719665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.207 [2024-12-16 22:42:04.719685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:15.207 [2024-12-16 22:42:04.726476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8 00:36:15.207 [2024-12-16 22:42:04.727022] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.727041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:15.207 [2024-12-16 22:42:04.733844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8
00:36:15.207 [2024-12-16 22:42:04.734204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.734224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:15.207 [2024-12-16 22:42:04.740718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8
00:36:15.207 [2024-12-16 22:42:04.741016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.741037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:15.207 [2024-12-16 22:42:04.747566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8
00:36:15.207 [2024-12-16 22:42:04.747902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.747922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:15.207 [2024-12-16 22:42:04.754419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8
00:36:15.207 [2024-12-16 22:42:04.754743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.754763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:15.207 [2024-12-16 22:42:04.761041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23b55c0) with pdu=0x200016eff3c8
00:36:15.207 [2024-12-16 22:42:04.761387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:15.207 [2024-12-16 22:42:04.761407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:15.207 5986.00 IOPS, 748.25 MiB/s
00:36:15.207 Latency(us)
00:36:15.207 [2024-12-16T21:42:04.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:15.208 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:15.208 nvme0n1 : 2.00 5983.49 747.94 0.00 0.00 2669.28 1903.66 8675.72
00:36:15.208 [2024-12-16T21:42:04.909Z] ===================================================================================================================
00:36:15.208 [2024-12-16T21:42:04.909Z] Total : 5983.49 747.94 0.00 0.00 2669.28 1903.66 8675.72
00:36:15.208 {
00:36:15.208   "results": [
00:36:15.208     {
00:36:15.208       "job": "nvme0n1",
00:36:15.208       "core_mask": "0x2",
00:36:15.208       "workload": "randwrite",
00:36:15.208       "status": "finished",
00:36:15.208       "queue_depth": 16,
00:36:15.208       "io_size": 131072,
00:36:15.208       "runtime": 2.003514,
00:36:15.208       "iops": 5983.487013317601,
00:36:15.208       "mibps": 747.9358766647001,
00:36:15.208       "io_failed": 0,
00:36:15.208       "io_timeout": 0,
00:36:15.208       "avg_latency_us": 2669.280358453692,
00:36:15.208       "min_latency_us": 1903.664761904762,
00:36:15.208       "max_latency_us": 8675.718095238095
00:36:15.208     }
00:36:15.208   ],
00:36:15.208   "core_count": 1
00:36:15.208 }
00:36:15.208 22:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:15.208 22:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:15.208 22:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:15.208 | .driver_specific
00:36:15.208 | .nvme_error
00:36:15.208 | .status_code
00:36:15.208 | .command_transient_transport_error'
00:36:15.208 22:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 ))
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531273
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531273 ']'
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531273
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531273
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531273'
00:36:15.467 killing process with pid 531273
00:36:15.467 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531273
00:36:15.467 Received shutdown signal, test time was about 2.000000 seconds
00:36:15.467
00:36:15.467 Latency(us)
00:36:15.468 [2024-12-16T21:42:05.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:15.468 [2024-12-16T21:42:05.169Z] ===================================================================================================================
00:36:15.468 [2024-12-16T21:42:05.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:15.468 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531273
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 529514
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 529514 ']'
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 529514
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529514
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529514'
00:36:15.727 killing process with pid 529514
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 529514
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 529514
00:36:15.727
00:36:15.727 real 0m13.700s
00:36:15.727 user 0m26.217s
00:36:15.727 sys 0m4.503s
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:15.727 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:15.727 ************************************
00:36:15.727 END TEST nvmf_digest_error
00:36:15.727 ************************************
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:15.986 rmmod nvme_tcp
00:36:15.986 rmmod nvme_fabrics
00:36:15.986 rmmod nvme_keyring
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 529514 ']'
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 529514
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 529514 ']'
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 529514
00:36:15.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (529514) - No such process
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 529514 is not found'
00:36:15.986 Process with pid 529514 is not found
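The get_transient_errcount check traced above reduces to a single RPC plus a jq filter: query per-bdev I/O statistics over the bperf control socket, then pull the NVMe completion counter for COMMAND TRANSIENT TRANSPORT ERROR out of the driver-specific section. Since every corrupted data digest should complete with that status, a non-zero count (387 in this run) passes the test. Below is a minimal standalone sketch of the same check; the rpc.py path, socket path, bdev name, and jq filter are taken from the trace, while the function body is a paraphrase of the traced commands rather than a verbatim copy of digest.sh.

    #!/usr/bin/env bash
    # Sketch of the transient-transport-error check traced above. Assumptions:
    # rpc.py path and bperf socket are the ones used in this run's trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns JSON; per-bdev NVMe completion-status
        # counters live under .driver_specific.nvme_error.status_code.
        "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # A passing digest-error run sees a non-zero count of (00/22) completions.
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))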
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:15.986 22:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:17.894 22:42:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:17.894
00:36:17.894 real 0m35.949s
00:36:17.894 user 0m54.619s
00:36:17.894 sys 0m13.623s
00:36:17.894 22:42:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:17.894 22:42:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:17.894 ************************************
00:36:17.894 END TEST nvmf_digest
00:36:17.894 ************************************
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.154 ************************************
00:36:18.154 START TEST nvmf_bdevperf
00:36:18.154 ************************************
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:18.154 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:18.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:18.154 --rc genhtml_branch_coverage=1
00:36:18.154 --rc genhtml_function_coverage=1
00:36:18.154 --rc genhtml_legend=1
00:36:18.154 --rc geninfo_all_blocks=1
00:36:18.154 --rc geninfo_unexecuted_blocks=1
00:36:18.154
00:36:18.154 '
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:18.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:18.154 --rc genhtml_branch_coverage=1
00:36:18.154 --rc genhtml_function_coverage=1
00:36:18.154 --rc genhtml_legend=1
00:36:18.154 --rc geninfo_all_blocks=1
00:36:18.154 --rc geninfo_unexecuted_blocks=1
00:36:18.154
00:36:18.154 '
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:36:18.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:18.154 --rc genhtml_branch_coverage=1
00:36:18.154 --rc genhtml_function_coverage=1
00:36:18.154 --rc genhtml_legend=1
00:36:18.154 --rc geninfo_all_blocks=1
00:36:18.154 --rc geninfo_unexecuted_blocks=1
00:36:18.154
00:36:18.154 '
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:36:18.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:18.154 --rc genhtml_branch_coverage=1
00:36:18.154 --rc genhtml_function_coverage=1
00:36:18.154 --rc genhtml_legend=1
00:36:18.154 --rc geninfo_all_blocks=1
00:36:18.154 --rc geninfo_unexecuted_blocks=1
00:36:18.154
00:36:18.154 '
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:18.154 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:18.155 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:18.414 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:36:18.414 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:36:18.414 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:18.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:18.415 22:42:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:23.694 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.694 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:23.954 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:23.954 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:23.954 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
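For context, the device discovery traced here reduces to a plain sysfs walk. The sketch below is a simplified stand-in for the harness's gather_supported_nvmf_pci_devs(), matching only the Intel E810 ID (0x8086:0x159b) seen in this run rather than the full vendor/device table:

    # Simplified sketch of the PCI scan above (Linux sysfs assumed):
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do           # net devices bound to this port
          [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
        done
      fi
    done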
00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:23.955 Found net devices under 0000:af:00.0: cvl_0_0 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:23.955 Found net devices under 0000:af:00.1: cvl_0_1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.955 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:24.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:24.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:36:24.214 00:36:24.214 --- 10.0.0.2 ping statistics --- 00:36:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.214 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:24.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:24.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:36:24.214 00:36:24.214 --- 10.0.0.1 ping statistics --- 00:36:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.214 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=535749 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 535749 00:36:24.214 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 535749 ']' 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.215 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.215 [2024-12-16 22:42:13.775635] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
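Condensed from the plumbing traced above: the harness isolates the target-side port in a private network namespace, addresses both ends of the link, opens TCP port 4420, and proves reachability in both directions before starting the target. A minimal standalone reproduction (interface and namespace names as in this run; needs root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator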
00:36:24.215 [2024-12-16 22:42:13.775686] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.215 [2024-12-16 22:42:13.852988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:24.215 [2024-12-16 22:42:13.876353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.215 [2024-12-16 22:42:13.876392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.215 [2024-12-16 22:42:13.876399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.215 [2024-12-16 22:42:13.876405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.215 [2024-12-16 22:42:13.876409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.215 [2024-12-16 22:42:13.877777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.215 [2024-12-16 22:42:13.877809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.215 [2024-12-16 22:42:13.877810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:24.474 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.474 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:24.474 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:24.474 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.474 22:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 [2024-12-16 22:42:14.009995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 Malloc0 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.474 [2024-12-16 22:42:14.072748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:24.474 { 00:36:24.474 "params": { 00:36:24.474 "name": "Nvme$subsystem", 00:36:24.474 "trtype": "$TEST_TRANSPORT", 00:36:24.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.474 "adrfam": "ipv4", 00:36:24.474 "trsvcid": "$NVMF_PORT", 00:36:24.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.474 "hdgst": ${hdgst:-false}, 00:36:24.474 "ddgst": ${ddgst:-false} 00:36:24.474 }, 00:36:24.474 "method": "bdev_nvme_attach_controller" 00:36:24.474 } 00:36:24.474 EOF 00:36:24.474 )") 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:24.474 22:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:24.474 "params": { 00:36:24.474 "name": "Nvme1", 00:36:24.474 "trtype": "tcp", 00:36:24.474 "traddr": "10.0.0.2", 00:36:24.474 "adrfam": "ipv4", 00:36:24.474 "trsvcid": "4420", 00:36:24.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.474 "hdgst": false, 00:36:24.474 "ddgst": false 00:36:24.474 }, 00:36:24.474 "method": "bdev_nvme_attach_controller" 00:36:24.474 }' 00:36:24.474 [2024-12-16 22:42:14.122983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
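The target bring-up just traced is a five-step sequence that could be replayed by hand with SPDK's scripts/rpc.py (default /var/tmp/spdk.sock assumed; the harness instead wraps each call in rpc_cmd inside the target namespace). Incidentally, the -m 0xE passed to nvmf_tgt is a core bitmask with bits 1-3 set, which is why three reactors came up on cores 1, 2 and 3 above.

    # Hedged sketch of the traced target setup, flags copied from the log:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                           # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420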
00:36:24.474 [2024-12-16 22:42:14.123024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535842 ] 00:36:24.733 [2024-12-16 22:42:14.194645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.734 [2024-12-16 22:42:14.217237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.992 Running I/O for 1 seconds... 00:36:25.928 10999.00 IOPS, 42.96 MiB/s 00:36:25.928 Latency(us) 00:36:25.928 [2024-12-16T21:42:15.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.928 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:25.928 Verification LBA range: start 0x0 length 0x4000 00:36:25.928 Nvme1n1 : 1.01 11090.37 43.32 0.00 0.00 11486.78 1326.32 17101.78 00:36:25.928 [2024-12-16T21:42:15.629Z] =================================================================================================================== 00:36:25.928 [2024-12-16T21:42:15.629Z] Total : 11090.37 43.32 0.00 0.00 11486.78 1326.32 17101.78 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=536074 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:26.187 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:26.187 { 00:36:26.187 "params": { 00:36:26.187 "name": "Nvme$subsystem", 00:36:26.188 "trtype": "$TEST_TRANSPORT", 00:36:26.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:26.188 "adrfam": "ipv4", 00:36:26.188 "trsvcid": "$NVMF_PORT", 00:36:26.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:26.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:26.188 "hdgst": ${hdgst:-false}, 00:36:26.188 "ddgst": ${ddgst:-false} 00:36:26.188 }, 00:36:26.188 "method": "bdev_nvme_attach_controller" 00:36:26.188 } 00:36:26.188 EOF 00:36:26.188 )") 00:36:26.188 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:26.188 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
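The one-second baseline above is internally consistent: with queue depth 128, Little's law predicts an average latency of 128 / 11090.37 IOPS, roughly 11.5 ms, closely matching the reported 11486.78 us. The harness now reruns bdevperf for 15 s and hard-kills the target mid-run; the flood of "ABORTED - SQ DELETION" completions that dominates the rest of this log is the expected fallout. A distilled sketch of that failure-injection step (bdevperf path and PID from this run; bdevperf.json stands in for the JSON the harness pipes over /dev/fd/63; the -f flag is copied from the trace and appears to keep bdevperf alive across I/O failures):

    ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
    sleep 3           # let the verify workload reach steady state
    kill -9 535749    # hard-kill nvmf_tgt; in-flight WRITEs then complete
    sleep 3           # as "ABORTED - SQ DELETION", as seen below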
00:36:26.188 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:26.188 22:42:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:26.188 "params": { 00:36:26.188 "name": "Nvme1", 00:36:26.188 "trtype": "tcp", 00:36:26.188 "traddr": "10.0.0.2", 00:36:26.188 "adrfam": "ipv4", 00:36:26.188 "trsvcid": "4420", 00:36:26.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:26.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:26.188 "hdgst": false, 00:36:26.188 "ddgst": false 00:36:26.188 }, 00:36:26.188 "method": "bdev_nvme_attach_controller" 00:36:26.188 }' 00:36:26.188 [2024-12-16 22:42:15.739362] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:26.188 [2024-12-16 22:42:15.739408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536074 ] 00:36:26.188 [2024-12-16 22:42:15.812669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.188 [2024-12-16 22:42:15.833067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.446 Running I/O for 15 seconds... 00:36:28.761 11393.00 IOPS, 44.50 MiB/s [2024-12-16T21:42:18.724Z] 11437.50 IOPS, 44.68 MiB/s [2024-12-16T21:42:18.724Z] 22:42:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 535749 00:36:29.023 22:42:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:29.023 [2024-12-16 22:42:18.707836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 
22:42:18.707969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.707989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.707998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.708005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.023 [2024-12-16 22:42:18.708013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.023 [2024-12-16 22:42:18.708020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:29.024 [2024-12-16 22:42:18.708589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:29.024 [2024-12-16 22:42:18.708595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:36:29.024 [2024-12-16 22:42:18.708603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:29.024 [2024-12-16 22:42:18.708609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/"ABORTED - SQ DELETION (00/08)" completion pair repeats for every remaining queued I/O on qid:1: WRITE lba:112384 through lba:112840 and READ lba:111824 through lba:111992, all len:8 ...]
00:36:29.026 [2024-12-16 22:42:18.709898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a70cb0 is same with the state(6) to be set
00:36:29.026 [2024-12-16 22:42:18.709907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:36:29.026 [2024-12-16 22:42:18.709913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:36:29.026 [2024-12-16 22:42:18.709920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112000 len:8 PRP1 0x0 PRP2 0x0
00:36:29.026 [2024-12-16 22:42:18.709927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:29.026 [2024-12-16 22:42:18.712729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.026 [2024-12-16 22:42:18.712783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.026 [2024-12-16 22:42:18.713308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.027 [2024-12-16 22:42:18.713325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.027 [2024-12-16 22:42:18.713334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.027 [2024-12-16 22:42:18.713509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.027 [2024-12-16 22:42:18.713682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.027 [2024-12-16 22:42:18.713690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.027 [2024-12-16 22:42:18.713700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.027 [2024-12-16 22:42:18.713707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.287 [2024-12-16 22:42:18.725918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.726330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.726348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.726357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.726534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.726708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.726717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.726724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.726731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.288 [2024-12-16 22:42:18.738651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.739077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.739094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.739102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.739278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.739447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.739455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.739461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.739467] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.288 [2024-12-16 22:42:18.751600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.752023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.752041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.752048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.752246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.752416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.752424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.752430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.752437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.288 [2024-12-16 22:42:18.764499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.764927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.764973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.764997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.765455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.765624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.765636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.765642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.765648] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.288 [2024-12-16 22:42:18.777415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.777849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.777866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.777873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.778042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.778216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.778225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.778231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.778237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.288 [2024-12-16 22:42:18.790156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.790553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.790569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.790576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.790735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.790893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.790901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.790907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.790913] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.288 [2024-12-16 22:42:18.803038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.803459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.803475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.803483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.803655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.803824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.803833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.803839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.803848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.288 [2024-12-16 22:42:18.815928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.816351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.816397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.816420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.817004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.817602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.817628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.817649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.817674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.288 [2024-12-16 22:42:18.831102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.831600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.831622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.831633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.831887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.832142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.832153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.832162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.832171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.288 [2024-12-16 22:42:18.844024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.288 [2024-12-16 22:42:18.844459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.288 [2024-12-16 22:42:18.844475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.288 [2024-12-16 22:42:18.844483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.288 [2024-12-16 22:42:18.844651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.288 [2024-12-16 22:42:18.844818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.288 [2024-12-16 22:42:18.844826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.288 [2024-12-16 22:42:18.844833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.288 [2024-12-16 22:42:18.844839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.856822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.857262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.857278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.857286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.857454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.857622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.857630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.857636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.857642] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.289 [2024-12-16 22:42:18.869630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.870059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.870075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.870082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.870255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.870423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.870431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.870438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.870444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.882472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.882896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.882912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.882919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.883087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.883261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.883269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.883276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.883282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.289 [2024-12-16 22:42:18.895265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.895704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.895720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.895727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.895899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.896070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.896079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.896085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.896091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.908031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.908444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.908461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.908468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.908637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.908804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.908812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.908818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.908824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.289 [2024-12-16 22:42:18.920845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.921256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.921272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.921279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.921438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.921596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.921604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.921610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.921616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.933604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.934032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.934075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.934098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.934691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.934866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.934878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.934884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.934890] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.289 [2024-12-16 22:42:18.946494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.946922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.946938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.946945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.947113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.947305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.947313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.947320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.947326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.959248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.959664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.959681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.959688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.959847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.960006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.960014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.960020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.960026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.289 [2024-12-16 22:42:18.972288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.972713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.289 [2024-12-16 22:42:18.972728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.289 [2024-12-16 22:42:18.972736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.289 [2024-12-16 22:42:18.972910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.289 [2024-12-16 22:42:18.973083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.289 [2024-12-16 22:42:18.973092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.289 [2024-12-16 22:42:18.973099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.289 [2024-12-16 22:42:18.973108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.289 [2024-12-16 22:42:18.985330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.289 [2024-12-16 22:42:18.985795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.290 [2024-12-16 22:42:18.985812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.290 [2024-12-16 22:42:18.985819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.290 [2024-12-16 22:42:18.985992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.290 [2024-12-16 22:42:18.986165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.290 [2024-12-16 22:42:18.986174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.290 [2024-12-16 22:42:18.986180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.290 [2024-12-16 22:42:18.986186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.549 [2024-12-16 22:42:18.998319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.549 [2024-12-16 22:42:18.998743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.549 [2024-12-16 22:42:18.998759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.549 [2024-12-16 22:42:18.998766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.549 [2024-12-16 22:42:18.998948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.549 [2024-12-16 22:42:18.999116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.549 [2024-12-16 22:42:18.999124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.549 [2024-12-16 22:42:18.999130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.549 [2024-12-16 22:42:18.999136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.549 [2024-12-16 22:42:19.011064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.549 [2024-12-16 22:42:19.011421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.549 [2024-12-16 22:42:19.011438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.549 [2024-12-16 22:42:19.011446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.549 [2024-12-16 22:42:19.011614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.549 [2024-12-16 22:42:19.011782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.549 [2024-12-16 22:42:19.011790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.549 [2024-12-16 22:42:19.011796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.549 [2024-12-16 22:42:19.011802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.549 [2024-12-16 22:42:19.023795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.549 [2024-12-16 22:42:19.024188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.549 [2024-12-16 22:42:19.024208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.549 [2024-12-16 22:42:19.024215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.549 [2024-12-16 22:42:19.024374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.549 [2024-12-16 22:42:19.024533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.549 [2024-12-16 22:42:19.024540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.549 [2024-12-16 22:42:19.024547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.549 [2024-12-16 22:42:19.024552] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.549 [2024-12-16 22:42:19.036639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.549 [2024-12-16 22:42:19.037060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.549 [2024-12-16 22:42:19.037104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.549 [2024-12-16 22:42:19.037128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.549 [2024-12-16 22:42:19.037737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.549 [2024-12-16 22:42:19.038202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.549 [2024-12-16 22:42:19.038211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.549 [2024-12-16 22:42:19.038217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.549 [2024-12-16 22:42:19.038223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.549 10120.67 IOPS, 39.53 MiB/s [2024-12-16T21:42:19.250Z] [2024-12-16 22:42:19.049465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.549 [2024-12-16 22:42:19.049793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.550 [2024-12-16 22:42:19.049808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:29.550 [2024-12-16 22:42:19.049815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:29.550 [2024-12-16 22:42:19.049974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:29.550 [2024-12-16 22:42:19.050133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.550 [2024-12-16 22:42:19.050140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.550 [2024-12-16 22:42:19.050146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.550 [2024-12-16 22:42:19.050152] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.550 [2024-12-16 22:42:19.062202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.062640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.062655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.062663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.062834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.063001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.063010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.063016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.063022] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.074938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.075257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.075273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.075280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.075438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.075597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.075605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.075611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.075617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.087750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.088160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.088175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.088182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.088388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.088561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.088569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.088575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.088581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.100588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.101049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.101094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.101116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.101716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.102314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.102349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.102380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.102387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.113391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.113833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.113849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.113856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.114015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.114174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.114181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.114187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.114199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.126210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.126621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.126637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.126644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.126803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.126961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.126969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.126975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.126980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.139038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.139448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.139464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.139472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.139640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.139807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.139816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.139822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.139831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.151876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.152282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.152299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.152306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.152476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.152651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.152662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.152669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.152675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.164759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.165184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.550 [2024-12-16 22:42:19.165241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.550 [2024-12-16 22:42:19.165265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.550 [2024-12-16 22:42:19.165734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.550 [2024-12-16 22:42:19.165903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.550 [2024-12-16 22:42:19.165911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.550 [2024-12-16 22:42:19.165917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.550 [2024-12-16 22:42:19.165923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.550 [2024-12-16 22:42:19.177546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.550 [2024-12-16 22:42:19.177883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.177898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.177906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.178073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.178247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.178256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.178262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.178268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.551 [2024-12-16 22:42:19.190601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.551 [2024-12-16 22:42:19.190986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.191002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.191009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.191183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.191365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.191375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.191381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.191387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.551 [2024-12-16 22:42:19.203779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.551 [2024-12-16 22:42:19.204149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.204208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.204233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.204736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.204904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.204913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.204919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.204925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.551 [2024-12-16 22:42:19.216661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.551 [2024-12-16 22:42:19.217009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.217025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.217032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.217209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.217377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.217386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.217392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.217398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.551 [2024-12-16 22:42:19.229767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.551 [2024-12-16 22:42:19.230170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.230186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.230200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.230376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.230549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.230557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.230563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.230569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.551 [2024-12-16 22:42:19.242843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.551 [2024-12-16 22:42:19.243244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.551 [2024-12-16 22:42:19.243261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.551 [2024-12-16 22:42:19.243268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.551 [2024-12-16 22:42:19.243442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.551 [2024-12-16 22:42:19.243615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.551 [2024-12-16 22:42:19.243624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.551 [2024-12-16 22:42:19.243630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.551 [2024-12-16 22:42:19.243636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.811 [2024-12-16 22:42:19.255888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.811 [2024-12-16 22:42:19.256253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.811 [2024-12-16 22:42:19.256270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.811 [2024-12-16 22:42:19.256278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.811 [2024-12-16 22:42:19.256461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.811 [2024-12-16 22:42:19.256646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.811 [2024-12-16 22:42:19.256655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.811 [2024-12-16 22:42:19.256661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.811 [2024-12-16 22:42:19.256668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.811 [2024-12-16 22:42:19.268990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.811 [2024-12-16 22:42:19.269430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.811 [2024-12-16 22:42:19.269447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.811 [2024-12-16 22:42:19.269455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.811 [2024-12-16 22:42:19.269639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.811 [2024-12-16 22:42:19.269822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.811 [2024-12-16 22:42:19.269834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.811 [2024-12-16 22:42:19.269840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.811 [2024-12-16 22:42:19.269847] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.811 [2024-12-16 22:42:19.282277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.811 [2024-12-16 22:42:19.282733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.282750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.282758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.282953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.283149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.283158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.283166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.283173] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.295441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.295878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.295895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.295903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.296086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.296276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.296285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.296292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.296298] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.308476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.308916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.308932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.308939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.309112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.309291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.309299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.309305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.309317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.321691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.322046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.322063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.322071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.322261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.322444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.322453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.322460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.322466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.334886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.335326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.335343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.335351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.335534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.335718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.335727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.335735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.335741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.348139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.348512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.348529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.348537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.348721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.348929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.348938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.348945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.348952] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.361403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.361852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.361868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.361876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.362060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.362252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.362261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.362268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.362274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.374596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.375028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.375036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.375226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.375410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.375419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.375425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.375431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.387666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.388067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.388084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.388091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.388270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.388444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.388452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.388459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.388465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.400717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.401065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.401109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.812 [2024-12-16 22:42:19.401133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.812 [2024-12-16 22:42:19.401740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.812 [2024-12-16 22:42:19.402218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.812 [2024-12-16 22:42:19.402226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.812 [2024-12-16 22:42:19.402233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.812 [2024-12-16 22:42:19.402239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.812 [2024-12-16 22:42:19.413771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.812 [2024-12-16 22:42:19.414099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.812 [2024-12-16 22:42:19.414115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.414123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.414302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.414483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.414491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.414497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.414503] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.426713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.427031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.427047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.427055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.427233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.427407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.427415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.427422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.427428] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.439646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.439998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.440015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.440022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.440196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.440365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.440376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.440383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.440389] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.452537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.452862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.452878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.452885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.453044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.453208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.453232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.453238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.453245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.465413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.465748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.465764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.465771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.465939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.466107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.466115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.466121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.466127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.478281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.478653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.478669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.478676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.478844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.479012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.479021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.479027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.479036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.491231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.491621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.491637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.491645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.491813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.491980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.491989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.491995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.492001] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.813 [2024-12-16 22:42:19.504223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.813 [2024-12-16 22:42:19.504569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.813 [2024-12-16 22:42:19.504584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:29.813 [2024-12-16 22:42:19.504591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:29.813 [2024-12-16 22:42:19.504759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:29.813 [2024-12-16 22:42:19.504927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.813 [2024-12-16 22:42:19.504935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.813 [2024-12-16 22:42:19.504941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.813 [2024-12-16 22:42:19.504947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.517234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.517513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.517529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.517536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.517710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.517882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.517892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.517900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.517906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.530304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.530694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.530711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.530718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.530891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.531066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.531074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.531080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.531087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.543307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.543635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.543651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.543658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.543831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.544003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.544011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.544017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.544023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.556337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.556692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.556708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.556715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.556882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.557050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.557058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.557064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.557070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.569324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.569603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.569619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.569627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.569798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.569966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.569974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.569980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.569986] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.582076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.582473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.582490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.582498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.582667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.582839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.582847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.582854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.582860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.594990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:30.074 [2024-12-16 22:42:19.595363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.074 [2024-12-16 22:42:19.595379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420
00:36:30.074 [2024-12-16 22:42:19.595387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set
00:36:30.074 [2024-12-16 22:42:19.595555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor
00:36:30.074 [2024-12-16 22:42:19.595722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:30.074 [2024-12-16 22:42:19.595730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:30.074 [2024-12-16 22:42:19.595737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:30.074 [2024-12-16 22:42:19.595743] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:30.074 [2024-12-16 22:42:19.607791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.074 [2024-12-16 22:42:19.608235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.074 [2024-12-16 22:42:19.608268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.074 [2024-12-16 22:42:19.608295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.074 [2024-12-16 22:42:19.608879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.609477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.609511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.609533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.609554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.075 [2024-12-16 22:42:19.620581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.621034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.621078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.621102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.621634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.621803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.621811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.621818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.621824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.075 [2024-12-16 22:42:19.633483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.633931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.633976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.633999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.634602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.634802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.634810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.634817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.634823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.075 [2024-12-16 22:42:19.646586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.646993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.647009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.647016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.647190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.647370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.647378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.647385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.647395] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.075 [2024-12-16 22:42:19.659384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.659827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.659871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.659895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.660433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.660602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.660610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.660616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.660623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.075 [2024-12-16 22:42:19.672269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.672617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.672633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.672640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.672808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.672976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.672984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.672990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.672996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.075 [2024-12-16 22:42:19.685127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.685563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.685580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.685587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.685756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.685924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.685932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.685938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.685944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.075 [2024-12-16 22:42:19.697992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.698421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.698465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.698488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.698966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.699126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.699134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.699140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.699145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.075 [2024-12-16 22:42:19.710824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.711241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.711256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.711263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.711423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.711582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.711590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.711597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.711602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.075 [2024-12-16 22:42:19.723570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.724016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.724060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.724083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.724646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.724815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.075 [2024-12-16 22:42:19.724823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.075 [2024-12-16 22:42:19.724829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.075 [2024-12-16 22:42:19.724835] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.075 [2024-12-16 22:42:19.736436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.075 [2024-12-16 22:42:19.736831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.075 [2024-12-16 22:42:19.736848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.075 [2024-12-16 22:42:19.736854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.075 [2024-12-16 22:42:19.737017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.075 [2024-12-16 22:42:19.737176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.076 [2024-12-16 22:42:19.737184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.076 [2024-12-16 22:42:19.737198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.076 [2024-12-16 22:42:19.737204] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.076 [2024-12-16 22:42:19.749315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.076 [2024-12-16 22:42:19.749746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.076 [2024-12-16 22:42:19.749762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.076 [2024-12-16 22:42:19.749769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.076 [2024-12-16 22:42:19.749936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.076 [2024-12-16 22:42:19.750103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.076 [2024-12-16 22:42:19.750111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.076 [2024-12-16 22:42:19.750118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.076 [2024-12-16 22:42:19.750124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.076 [2024-12-16 22:42:19.762080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.076 [2024-12-16 22:42:19.762529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.076 [2024-12-16 22:42:19.762575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.076 [2024-12-16 22:42:19.762600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.076 [2024-12-16 22:42:19.763024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.076 [2024-12-16 22:42:19.763198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.076 [2024-12-16 22:42:19.763207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.076 [2024-12-16 22:42:19.763214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.076 [2024-12-16 22:42:19.763237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.337 [2024-12-16 22:42:19.775091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.775524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.775541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.775549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.775722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.775894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.775906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.775912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.775918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.337 [2024-12-16 22:42:19.787922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.788336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.788352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.788359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.788518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.788677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.788685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.788691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.788697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.337 [2024-12-16 22:42:19.800799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.801161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.801176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.801184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.801363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.801543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.801551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.801557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.801563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.337 [2024-12-16 22:42:19.813656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.814071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.814086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.814093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.814277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.814445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.814453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.814459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.814468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.337 [2024-12-16 22:42:19.826610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.826970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.826986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.826993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.827161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.827335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.827343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.827350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.827356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.337 [2024-12-16 22:42:19.839445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.839880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.839914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.839940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.840541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.841113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.841121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.841127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.841134] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.337 [2024-12-16 22:42:19.852176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.852510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.852525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.852532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.852692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.852850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.852858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.852864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.852870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.337 [2024-12-16 22:42:19.864940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.865385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.865400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.865407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.865579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.865738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.865746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.865752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.865757] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.337 [2024-12-16 22:42:19.877806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.878216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.878231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.337 [2024-12-16 22:42:19.878238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.337 [2024-12-16 22:42:19.878396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.337 [2024-12-16 22:42:19.878555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.337 [2024-12-16 22:42:19.878563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.337 [2024-12-16 22:42:19.878569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.337 [2024-12-16 22:42:19.878575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.337 [2024-12-16 22:42:19.890662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.337 [2024-12-16 22:42:19.891051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.337 [2024-12-16 22:42:19.891066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.891074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.891255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.891423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.891431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.891437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.891443] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.338 [2024-12-16 22:42:19.903513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.903835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.903881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.903905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.904429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.904599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.904607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.904613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.904620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.338 [2024-12-16 22:42:19.916239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.916678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.916694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.916701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.916877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.917036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.917044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.917049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.917055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.338 [2024-12-16 22:42:19.928984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.929397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.929414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.929421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.929589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.929757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.929765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.929771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.929777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.338 [2024-12-16 22:42:19.941834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.942202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.942247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.942271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.942854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.943386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.943400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.943406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.943413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.338 [2024-12-16 22:42:19.954683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.955113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.955180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.955781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.956385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.956393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.956400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.956406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.338 [2024-12-16 22:42:19.967522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.967909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.967924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.967931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.968090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.968272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.968281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.968287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.968294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.338 [2024-12-16 22:42:19.980324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.980738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.980753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.980760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.980919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.981077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.981085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.981090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.981099] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.338 [2024-12-16 22:42:19.993148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:19.993479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:19.993495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:19.993502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:19.993661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:19.993820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:19.993829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:19.993835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:19.993840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.338 [2024-12-16 22:42:20.006482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:20.006935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:20.006953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:20.006962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.338 [2024-12-16 22:42:20.007147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.338 [2024-12-16 22:42:20.007338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.338 [2024-12-16 22:42:20.007349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.338 [2024-12-16 22:42:20.007357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.338 [2024-12-16 22:42:20.007363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.338 [2024-12-16 22:42:20.019487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.338 [2024-12-16 22:42:20.019865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.338 [2024-12-16 22:42:20.019882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.338 [2024-12-16 22:42:20.019889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.339 [2024-12-16 22:42:20.020063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.339 [2024-12-16 22:42:20.020245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.339 [2024-12-16 22:42:20.020254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.339 [2024-12-16 22:42:20.020261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.339 [2024-12-16 22:42:20.020268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.339 [2024-12-16 22:42:20.032761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.339 [2024-12-16 22:42:20.033130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.339 [2024-12-16 22:42:20.033148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.339 [2024-12-16 22:42:20.033157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.339 [2024-12-16 22:42:20.033351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.339 [2024-12-16 22:42:20.033524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.339 [2024-12-16 22:42:20.033533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.339 [2024-12-16 22:42:20.033539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.339 [2024-12-16 22:42:20.033546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.599 7590.50 IOPS, 29.65 MiB/s [2024-12-16T21:42:20.300Z]
[12 further identical reconnect-failure cycles follow between 22:42:20.046 and 22:42:20.190, again differing only in their timestamps]
00:36:30.600 [2024-12-16 22:42:20.202662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.600 [2024-12-16 22:42:20.203078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.600 [2024-12-16 22:42:20.203093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.600 [2024-12-16 22:42:20.203100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.600 [2024-12-16 22:42:20.203274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.600 [2024-12-16 22:42:20.203443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.600 [2024-12-16 22:42:20.203451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.600 [2024-12-16 22:42:20.203457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.600 [2024-12-16 22:42:20.203463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.601 [2024-12-16 22:42:20.215582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.216003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.216019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.216026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.216202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.216370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.216378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.216384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.216390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.601 [2024-12-16 22:42:20.228598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.228956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.228972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.228979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.229147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.229322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.229330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.229337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.229343] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.601 [2024-12-16 22:42:20.241497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.241921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.241938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.241945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.242116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.242291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.242300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.242306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.242312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.601 [2024-12-16 22:42:20.254501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.254921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.254937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.254945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.255112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.255286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.255295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.255302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.255308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.601 [2024-12-16 22:42:20.267472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.267874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.267890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.267897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.268065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.268238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.268247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.268253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.268259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.601 [2024-12-16 22:42:20.280450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.280847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.280871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.281353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.281521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.281532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.281539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.281545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.601 [2024-12-16 22:42:20.293321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.601 [2024-12-16 22:42:20.293733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.601 [2024-12-16 22:42:20.293749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.601 [2024-12-16 22:42:20.293756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.601 [2024-12-16 22:42:20.293923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.601 [2024-12-16 22:42:20.294090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.601 [2024-12-16 22:42:20.294099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.601 [2024-12-16 22:42:20.294105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.601 [2024-12-16 22:42:20.294111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.862 [2024-12-16 22:42:20.306269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.306671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.306687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.306694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.306868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.307040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.307049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.307055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.307061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.862 [2024-12-16 22:42:20.319264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.319680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.319697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.319706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.319877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.320045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.320054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.320061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.320072] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.862 [2024-12-16 22:42:20.332227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.332634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.332650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.332657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.332826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.332994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.333002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.333008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.333014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.862 [2024-12-16 22:42:20.345073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.345505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.345550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.345573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.346120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.346301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.346309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.346316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.346323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.862 [2024-12-16 22:42:20.358059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.358476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.358492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.358499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.358667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.358835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.358843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.358849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.358855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.862 [2024-12-16 22:42:20.371033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.371467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.371483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.371490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.371658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.371826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.371834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.371840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.371846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.862 [2024-12-16 22:42:20.383947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.862 [2024-12-16 22:42:20.384387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.862 [2024-12-16 22:42:20.384403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.862 [2024-12-16 22:42:20.384410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.862 [2024-12-16 22:42:20.384577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.862 [2024-12-16 22:42:20.384745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.862 [2024-12-16 22:42:20.384753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.862 [2024-12-16 22:42:20.384760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.862 [2024-12-16 22:42:20.384765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.862 [2024-12-16 22:42:20.396937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.397341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.397358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.397366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.397534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.397702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.397710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.397716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.397722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.863 [2024-12-16 22:42:20.409922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.410323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.410339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.410346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.410517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.410684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.410693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.410699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.410705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.863 [2024-12-16 22:42:20.422897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.423315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.423361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.423384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.423799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.423968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.423976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.423982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.423988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.863 [2024-12-16 22:42:20.435813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.436139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.436205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.436229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.436727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.436896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.436904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.436911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.436916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.863 [2024-12-16 22:42:20.448784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.449186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.449207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.449215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.449383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.449551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.449562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.449568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.449575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.863 [2024-12-16 22:42:20.461724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.462137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.462153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.462160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.462334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.462503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.462511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.462517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.462523] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.863 [2024-12-16 22:42:20.474659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.475079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.475123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.475146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.475649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.475819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.475827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.475833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.475839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.863 [2024-12-16 22:42:20.487690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.488087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.488103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.488110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.488282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.488451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.488459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.488465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.488474] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.863 [2024-12-16 22:42:20.500643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.501092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.501135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.501158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.501660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.501829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.501837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.501843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.501849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.863 [2024-12-16 22:42:20.513539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.863 [2024-12-16 22:42:20.513945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.863 [2024-12-16 22:42:20.513962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.863 [2024-12-16 22:42:20.513969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.863 [2024-12-16 22:42:20.514137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.863 [2024-12-16 22:42:20.514311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.863 [2024-12-16 22:42:20.514320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.863 [2024-12-16 22:42:20.514326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.863 [2024-12-16 22:42:20.514334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.863 [2024-12-16 22:42:20.526491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.864 [2024-12-16 22:42:20.526922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.864 [2024-12-16 22:42:20.526965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.864 [2024-12-16 22:42:20.526988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.864 [2024-12-16 22:42:20.527585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.864 [2024-12-16 22:42:20.527995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.864 [2024-12-16 22:42:20.528002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.864 [2024-12-16 22:42:20.528008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.864 [2024-12-16 22:42:20.528014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:30.864 [2024-12-16 22:42:20.539408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.864 [2024-12-16 22:42:20.539857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.864 [2024-12-16 22:42:20.539872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.864 [2024-12-16 22:42:20.539879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.864 [2024-12-16 22:42:20.540047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.864 [2024-12-16 22:42:20.540221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.864 [2024-12-16 22:42:20.540229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.864 [2024-12-16 22:42:20.540236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.864 [2024-12-16 22:42:20.540242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:30.864 [2024-12-16 22:42:20.552451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:30.864 [2024-12-16 22:42:20.552847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.864 [2024-12-16 22:42:20.552864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:30.864 [2024-12-16 22:42:20.552871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:30.864 [2024-12-16 22:42:20.553044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:30.864 [2024-12-16 22:42:20.553223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:30.864 [2024-12-16 22:42:20.553232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:30.864 [2024-12-16 22:42:20.553239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:30.864 [2024-12-16 22:42:20.553245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.124 [2024-12-16 22:42:20.565563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.124 [2024-12-16 22:42:20.566005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-16 22:42:20.566054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.124 [2024-12-16 22:42:20.566078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.124 [2024-12-16 22:42:20.566650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.124 [2024-12-16 22:42:20.566819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.124 [2024-12-16 22:42:20.566827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.124 [2024-12-16 22:42:20.566833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.124 [2024-12-16 22:42:20.566838] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.124 [2024-12-16 22:42:20.578460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.124 [2024-12-16 22:42:20.578791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-16 22:42:20.578807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.124 [2024-12-16 22:42:20.578814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.124 [2024-12-16 22:42:20.578987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.124 [2024-12-16 22:42:20.579159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.124 [2024-12-16 22:42:20.579167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.124 [2024-12-16 22:42:20.579173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.579180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.125 [2024-12-16 22:42:20.591399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.591816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.591832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.591839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.592007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.592176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.592185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.592197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.592203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.125 [2024-12-16 22:42:20.604416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.604733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.604756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.604927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.605096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.605104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.605110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.605117] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.125 [2024-12-16 22:42:20.617518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.617841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.617857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.617865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.618037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.618217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.618229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.618235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.618241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.125 [2024-12-16 22:42:20.630429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.630775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.630807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.630831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.631426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.632015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.632023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.632029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.632035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.125 [2024-12-16 22:42:20.643400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.643785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.643828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.643851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.644293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.644462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.644470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.644476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.644483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.125 [2024-12-16 22:42:20.656345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.656672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.656688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.656695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.656863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.657030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.657039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.657045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.657054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.125 [2024-12-16 22:42:20.669236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.669608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.669624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.669631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.669799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.669967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.669976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.669982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.669988] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.125 [2024-12-16 22:42:20.682186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.682480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.682496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.682503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.682672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.682841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.682850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.682856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.682862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.125 [2024-12-16 22:42:20.695213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.695494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.695509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.695517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.695685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.125 [2024-12-16 22:42:20.695853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.125 [2024-12-16 22:42:20.695862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.125 [2024-12-16 22:42:20.695868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.125 [2024-12-16 22:42:20.695875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.125 [2024-12-16 22:42:20.708147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.125 [2024-12-16 22:42:20.708512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-16 22:42:20.708528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.125 [2024-12-16 22:42:20.708536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.125 [2024-12-16 22:42:20.708709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.708882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.708890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.708896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.708902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.126 [2024-12-16 22:42:20.721129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.721522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.721538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.721545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.721718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.721890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.721898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.721905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.721911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.126 [2024-12-16 22:42:20.734108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.734498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.734543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.734566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.735056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.735232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.735240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.735246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.735252] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.126 [2024-12-16 22:42:20.747134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.747477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.747493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.747500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.747672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.747841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.747848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.747855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.747860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.126 [2024-12-16 22:42:20.760093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.760425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.760442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.760450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.760618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.760786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.760795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.760802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.760808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.126 [2024-12-16 22:42:20.772993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.773385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.773402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.773409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.773578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.773746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.773754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.773761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.773767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.126 [2024-12-16 22:42:20.785929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.786288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.786305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.786312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.786480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.786648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.786660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.786666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.786672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.126 [2024-12-16 22:42:20.798828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.799247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.799293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.799316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.799900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.800427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.800435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.800442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.800448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.126 [2024-12-16 22:42:20.811776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.812189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.812211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.812219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.812388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.126 [2024-12-16 22:42:20.812560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.126 [2024-12-16 22:42:20.812570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.126 [2024-12-16 22:42:20.812577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.126 [2024-12-16 22:42:20.812583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.126 [2024-12-16 22:42:20.824901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.126 [2024-12-16 22:42:20.825332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-16 22:42:20.825350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.126 [2024-12-16 22:42:20.825357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.126 [2024-12-16 22:42:20.825530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.387 [2024-12-16 22:42:20.825707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.387 [2024-12-16 22:42:20.825716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.387 [2024-12-16 22:42:20.825723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.387 [2024-12-16 22:42:20.825732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.387 [2024-12-16 22:42:20.837919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.387 [2024-12-16 22:42:20.838321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.387 [2024-12-16 22:42:20.838367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.387 [2024-12-16 22:42:20.838391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.387 [2024-12-16 22:42:20.838974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.387 [2024-12-16 22:42:20.839516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.387 [2024-12-16 22:42:20.839525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.387 [2024-12-16 22:42:20.839531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.387 [2024-12-16 22:42:20.839537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.387 [2024-12-16 22:42:20.850935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.387 [2024-12-16 22:42:20.851339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.387 [2024-12-16 22:42:20.851356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.387 [2024-12-16 22:42:20.851363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.387 [2024-12-16 22:42:20.851531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.387 [2024-12-16 22:42:20.851699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.387 [2024-12-16 22:42:20.851707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.387 [2024-12-16 22:42:20.851713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.387 [2024-12-16 22:42:20.851719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.387 [2024-12-16 22:42:20.863939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.387 [2024-12-16 22:42:20.864303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.387 [2024-12-16 22:42:20.864320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.387 [2024-12-16 22:42:20.864327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.387 [2024-12-16 22:42:20.864516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.387 [2024-12-16 22:42:20.864689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.864697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.864704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.864710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.388 [2024-12-16 22:42:20.876930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.877304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.877349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.877373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.877600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.877768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.877776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.877782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.877789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.388 [2024-12-16 22:42:20.889797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.890230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.890237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.890405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.890573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.890581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.890588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.890594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.388 [2024-12-16 22:42:20.902659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.902996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.903012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.903019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.903187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.903362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.903370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.903376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.903383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.388 [2024-12-16 22:42:20.915494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.915911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.915927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.915934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.916105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.916280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.916289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.916296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.916302] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.388 [2024-12-16 22:42:20.928365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.928666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.928681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.928689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.928856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.929024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.929032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.929038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.929044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.388 [2024-12-16 22:42:20.941101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.941438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.941455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.941462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.941630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.941804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.941813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.941820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.941826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.388 [2024-12-16 22:42:20.953893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.954335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.954379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.954402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.954936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.955105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.955116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.955123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.955130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.388 [2024-12-16 22:42:20.966766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.967096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.967111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.967118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.967293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.967462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.967470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.967476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.967483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.388 [2024-12-16 22:42:20.979535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.979986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.980028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.980051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.980649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.981067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.981075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.981081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.388 [2024-12-16 22:42:20.981087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.388 [2024-12-16 22:42:20.992392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.388 [2024-12-16 22:42:20.992802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.388 [2024-12-16 22:42:20.992817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.388 [2024-12-16 22:42:20.992824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.388 [2024-12-16 22:42:20.992992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.388 [2024-12-16 22:42:20.993160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.388 [2024-12-16 22:42:20.993168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.388 [2024-12-16 22:42:20.993174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:20.993184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.389 [2024-12-16 22:42:21.005312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.005666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.005681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.005688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.005856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.006023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.006031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.006038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.006044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.389 [2024-12-16 22:42:21.018081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.018502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.018547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.018570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.019050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.019223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.019232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.019238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.019245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.389 [2024-12-16 22:42:21.030811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.031220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.031237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.031245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.031412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.031581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.031589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.031595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.031602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.389 [2024-12-16 22:42:21.043763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.044176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.044197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.044205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.044372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.044540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.044546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.044553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.044558] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.389 6072.40 IOPS, 23.72 MiB/s [2024-12-16T21:42:21.090Z] [2024-12-16 22:42:21.056604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.057040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.057072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.057097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.057695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.058200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.058209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.058215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.058221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.389 [2024-12-16 22:42:21.069333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.069764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.069807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.069830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.070378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.070772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.070789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.070803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.070816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.389 [2024-12-16 22:42:21.084345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.389 [2024-12-16 22:42:21.084771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.389 [2024-12-16 22:42:21.084793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.389 [2024-12-16 22:42:21.084808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.389 [2024-12-16 22:42:21.085062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.389 [2024-12-16 22:42:21.085326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.389 [2024-12-16 22:42:21.085338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.389 [2024-12-16 22:42:21.085348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.389 [2024-12-16 22:42:21.085357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.650 [2024-12-16 22:42:21.097381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.097790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.097806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.097814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.097986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.098159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.650 [2024-12-16 22:42:21.098167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.650 [2024-12-16 22:42:21.098173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.650 [2024-12-16 22:42:21.098179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.650 [2024-12-16 22:42:21.110187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.110582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.110598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.110605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.110774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.110941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.650 [2024-12-16 22:42:21.110949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.650 [2024-12-16 22:42:21.110955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.650 [2024-12-16 22:42:21.110961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.650 [2024-12-16 22:42:21.123040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.123453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.123497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.123520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.124022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.124196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.650 [2024-12-16 22:42:21.124207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.650 [2024-12-16 22:42:21.124214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.650 [2024-12-16 22:42:21.124220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.650 [2024-12-16 22:42:21.135899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.136314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.136358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.136382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.136966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.137423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.650 [2024-12-16 22:42:21.137431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.650 [2024-12-16 22:42:21.137437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.650 [2024-12-16 22:42:21.137443] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.650 [2024-12-16 22:42:21.148750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.149140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.149155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.149162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.149350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.149518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.650 [2024-12-16 22:42:21.149526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.650 [2024-12-16 22:42:21.149532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.650 [2024-12-16 22:42:21.149538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.650 [2024-12-16 22:42:21.161514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.650 [2024-12-16 22:42:21.161899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-12-16 22:42:21.161914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-12-16 22:42:21.161921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.650 [2024-12-16 22:42:21.162080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.650 [2024-12-16 22:42:21.162263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.162271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.162277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.162286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.651 [2024-12-16 22:42:21.174315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.174652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.174667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.174674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.174832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.174991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.174998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.175004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.175010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.651 [2024-12-16 22:42:21.187176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.187621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.187637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.187644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.187812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.187980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.187988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.187995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.188001] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.651 [2024-12-16 22:42:21.199916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.200327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.200343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.200351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.200519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.200686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.200695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.200701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.200708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.651 [2024-12-16 22:42:21.212725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.213178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.213235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.213259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.213706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.213874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.213882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.213888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.213894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.651 [2024-12-16 22:42:21.225515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.225938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.225954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.225961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.226129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.226301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.226310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.226316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.226321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.651 [2024-12-16 22:42:21.238311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.238745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.238760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.238768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.238936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.239104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.239112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.239118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.239124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.651 [2024-12-16 22:42:21.251044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.251389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.251405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.251412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.251583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.251751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.251759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.251765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.251771] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.651 [2024-12-16 22:42:21.263775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.264212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.264229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.264236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.264404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.264572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.264581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.264587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.264593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.651 [2024-12-16 22:42:21.276546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.276894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.276910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.276916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.277075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.277257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.277265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.277271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.651 [2024-12-16 22:42:21.277278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.651 [2024-12-16 22:42:21.289320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.651 [2024-12-16 22:42:21.289682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-12-16 22:42:21.289698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.651 [2024-12-16 22:42:21.289705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.651 [2024-12-16 22:42:21.289864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.651 [2024-12-16 22:42:21.290023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.651 [2024-12-16 22:42:21.290034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.651 [2024-12-16 22:42:21.290040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.652 [2024-12-16 22:42:21.290046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.652 [2024-12-16 22:42:21.302194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.652 [2024-12-16 22:42:21.302546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-12-16 22:42:21.302561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.652 [2024-12-16 22:42:21.302568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.652 [2024-12-16 22:42:21.302736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.652 [2024-12-16 22:42:21.302904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.652 [2024-12-16 22:42:21.302912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.652 [2024-12-16 22:42:21.302918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.652 [2024-12-16 22:42:21.302924] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.652 [2024-12-16 22:42:21.314918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.652 [2024-12-16 22:42:21.315290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-12-16 22:42:21.315339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.652 [2024-12-16 22:42:21.315363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.652 [2024-12-16 22:42:21.315903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.652 [2024-12-16 22:42:21.316071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.652 [2024-12-16 22:42:21.316080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.652 [2024-12-16 22:42:21.316086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.652 [2024-12-16 22:42:21.316093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.652 [2024-12-16 22:42:21.327718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.652 [2024-12-16 22:42:21.328050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-12-16 22:42:21.328065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.652 [2024-12-16 22:42:21.328072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.652 [2024-12-16 22:42:21.328253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.652 [2024-12-16 22:42:21.328422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.652 [2024-12-16 22:42:21.328430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.652 [2024-12-16 22:42:21.328436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.652 [2024-12-16 22:42:21.328445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.652 [2024-12-16 22:42:21.340485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.652 [2024-12-16 22:42:21.340901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-12-16 22:42:21.340916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.652 [2024-12-16 22:42:21.340923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.652 [2024-12-16 22:42:21.341082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.652 [2024-12-16 22:42:21.341265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.652 [2024-12-16 22:42:21.341274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.652 [2024-12-16 22:42:21.341280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.652 [2024-12-16 22:42:21.341286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.913 [2024-12-16 22:42:21.353594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.913 [2024-12-16 22:42:21.353853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.913 [2024-12-16 22:42:21.353869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.913 [2024-12-16 22:42:21.353876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.913 [2024-12-16 22:42:21.354044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.913 [2024-12-16 22:42:21.354217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.913 [2024-12-16 22:42:21.354226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.913 [2024-12-16 22:42:21.354232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.913 [2024-12-16 22:42:21.354238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.913 [2024-12-16 22:42:21.366552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.913 [2024-12-16 22:42:21.366973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.913 [2024-12-16 22:42:21.367015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.913 [2024-12-16 22:42:21.367038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.913 [2024-12-16 22:42:21.367543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.913 [2024-12-16 22:42:21.367712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.913 [2024-12-16 22:42:21.367720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.913 [2024-12-16 22:42:21.367727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.913 [2024-12-16 22:42:21.367733] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.913 [2024-12-16 22:42:21.379289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.913 [2024-12-16 22:42:21.379718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.913 [2024-12-16 22:42:21.379733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.913 [2024-12-16 22:42:21.379740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.913 [2024-12-16 22:42:21.379899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.913 [2024-12-16 22:42:21.380058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.913 [2024-12-16 22:42:21.380065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.913 [2024-12-16 22:42:21.380071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.913 [2024-12-16 22:42:21.380077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.913 [2024-12-16 22:42:21.392128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.913 [2024-12-16 22:42:21.392570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.913 [2024-12-16 22:42:21.392586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.913 [2024-12-16 22:42:21.392593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.913 [2024-12-16 22:42:21.392762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.913 [2024-12-16 22:42:21.392929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.913 [2024-12-16 22:42:21.392937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.913 [2024-12-16 22:42:21.392944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.913 [2024-12-16 22:42:21.392950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.913 [2024-12-16 22:42:21.405026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.913 [2024-12-16 22:42:21.405469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.913 [2024-12-16 22:42:21.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.913 [2024-12-16 22:42:21.405493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.913 [2024-12-16 22:42:21.405678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.913 [2024-12-16 22:42:21.405847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.405855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.405861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.405867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.914 [2024-12-16 22:42:21.417754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.418101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.418117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.418124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.418310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.418479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.418487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.418493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.418499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.914 [2024-12-16 22:42:21.430510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.430927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.430942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.430949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.431108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.431290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.431299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.431305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.431311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.914 [2024-12-16 22:42:21.443348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.443767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.443783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.443790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.443949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.444107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.444115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.444120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.444126] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.914 [2024-12-16 22:42:21.456188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.456610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.456625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.456632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.456791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.456950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.456960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.456966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.456972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
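The timestamps show the same disconnect/reconnect/fail cycle repeating roughly every 12-13 ms (for example 21.443348, 21.456188, 21.468957). A hedged sketch of that cadence as a generic retry loop: try_connect() is a hypothetical stand-in for the transport connect, the fixed delay only mirrors the spacing observed in this log rather than SPDK's actual poller-driven scheduling, and the attempt bound is there just to keep the sketch finite (the loop in the log does not self-terminate):

    #define _POSIX_C_SOURCE 199309L
    /* Generic sketch of the retry cadence, not bdev_nvme's actual reset path. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static bool try_connect(void)
    {
        errno = ECONNREFUSED;            /* every attempt in this log is refused */
        return false;
    }

    int main(void)
    {
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 12 * 1000 * 1000 };

        for (int attempt = 1; attempt <= 5; attempt++) {
            printf("resetting controller (attempt %d)\n", attempt);
            if (try_connect()) {
                printf("reconnected\n");
                return 0;
            }
            printf("controller reinitialization failed, errno = %d\n", errno);
            nanosleep(&delay, NULL);     /* ~12-13 ms between cycles in the log */
        }
        fprintf(stderr, "controller left in failed state\n");
        return 1;
    }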
00:36:31.914 [2024-12-16 22:42:21.468957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.469365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.469382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.469389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.469548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.469706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.469714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.469720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.469726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.914 [2024-12-16 22:42:21.481751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.482174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.482230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.482254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.482838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.483258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.483266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.483272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.483279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.914 [2024-12-16 22:42:21.494546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.494974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.495018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.495041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.495641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.496126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.496133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.496140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.496149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.914 [2024-12-16 22:42:21.507441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.507873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.507918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.507941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.508366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.508536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.508544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.508550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.508556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.914 [2024-12-16 22:42:21.520200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.520615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.520630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.520637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.520796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.520954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.520962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.520968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.914 [2024-12-16 22:42:21.520974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.914 [2024-12-16 22:42:21.533017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.914 [2024-12-16 22:42:21.533430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.914 [2024-12-16 22:42:21.533446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.914 [2024-12-16 22:42:21.533453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.914 [2024-12-16 22:42:21.533621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.914 [2024-12-16 22:42:21.533789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.914 [2024-12-16 22:42:21.533797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.914 [2024-12-16 22:42:21.533803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.533809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.915 [2024-12-16 22:42:21.545855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.546226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.546242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.546249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.546432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.546599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.546607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.546614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.546619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.915 [2024-12-16 22:42:21.558752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.559099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.559114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.559121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.559294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.559462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.559471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.559477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.559483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.915 [2024-12-16 22:42:21.571496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.571917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.571960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.571984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.572442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.572610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.572619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.572625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.572631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.915 [2024-12-16 22:42:21.584243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.584546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.584562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.584569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.584740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.584908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.584916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.584922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.584928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:31.915 [2024-12-16 22:42:21.596972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.597404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.597419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.597426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.597585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.597744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.597751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.597757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.597763] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:31.915 [2024-12-16 22:42:21.609894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:31.915 [2024-12-16 22:42:21.610247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.915 [2024-12-16 22:42:21.610265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:31.915 [2024-12-16 22:42:21.610272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:31.915 [2024-12-16 22:42:21.610445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:31.915 [2024-12-16 22:42:21.610621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:31.915 [2024-12-16 22:42:21.610629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:31.915 [2024-12-16 22:42:21.610635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:31.915 [2024-12-16 22:42:21.610642] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.176 [2024-12-16 22:42:21.622808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.176 [2024-12-16 22:42:21.623242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.176 [2024-12-16 22:42:21.623258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.176 [2024-12-16 22:42:21.623265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.176 [2024-12-16 22:42:21.623433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.176 [2024-12-16 22:42:21.623600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.623611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.623617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.623623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.177 [2024-12-16 22:42:21.635648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.635977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.636024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.636048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.636618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.636985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.637001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.637013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.637026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.177 [2024-12-16 22:42:21.650023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.650431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.650451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.650460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.650695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.650930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.650941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.650949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.650958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.177 [2024-12-16 22:42:21.662807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.663250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.663266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.663274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.663442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.663610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.663618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.663624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.663633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.177 [2024-12-16 22:42:21.675531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.675942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.675957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.675964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.676123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.676308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.676317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.676323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.676329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.177 [2024-12-16 22:42:21.688306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.688656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.688672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.688680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.688847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.689015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.689023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.689029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.689035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.177 [2024-12-16 22:42:21.701076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.701485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.701501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.701508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.701676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.701844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.701852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.701858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.701864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 535749 Killed "${NVMF_APP[@]}" "$@" 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=536975 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 536975 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 536975 ']' 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.177 [2024-12-16 22:42:21.714168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.177 [2024-12-16 22:42:21.714602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.714619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.714626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.714799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.177 [2024-12-16 22:42:21.714973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.714982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.714989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.714996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.177 [2024-12-16 22:42:21.727206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.177 [2024-12-16 22:42:21.727543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.177 [2024-12-16 22:42:21.727559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.177 [2024-12-16 22:42:21.727566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.177 [2024-12-16 22:42:21.727739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.177 [2024-12-16 22:42:21.727912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.177 [2024-12-16 22:42:21.727919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.177 [2024-12-16 22:42:21.727926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.177 [2024-12-16 22:42:21.727932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
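At this point bdevperf.sh has killed the old target (pid 535749, the "Killed" line above) and tgt_init has launched a fresh nvmf_tgt (nvmfpid=536975), after which waitforlisten polls until the new process accepts connections on the RPC socket /var/tmp/spdk.sock (rpc_addr, with max_retries=100, per the trace). A rough standalone C model of that wait loop under those assumptions; the real waitforlisten is a shell helper in nvmf/common.sh, and the 100 ms polling interval here is invented:

    #define _POSIX_C_SOURCE 199309L
    /* Conceptual model of waitforlisten: poll the RPC UNIX-domain socket
     * until the freshly started nvmf_tgt accepts a connection. rpc_addr and
     * max_retries come from the trace above. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *rpc_addr = "/var/tmp/spdk.sock";
        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, rpc_addr, sizeof(addr.sun_path) - 1);

        for (int retries = 100; retries > 0; retries--) {   /* max_retries=100 */
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("target is listening on %s\n", rpc_addr);
                close(fd);
                return 0;
            }
            close(fd);                   /* not up yet: ENOENT / ECONNREFUSED */
            struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
            nanosleep(&ts, NULL);
        }
        fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
        return 1;
    }

Until that socket appears and the subsystems are reconfigured, the host-side reset attempts interleaved below keep failing with the same errno 111.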
00:36:32.177 [2024-12-16 22:42:21.740316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.740740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.740756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.740764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.740938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.741111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.741120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.741126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.741133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.753542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.753972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.753988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.753996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.754170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.754348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.754357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.754364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.754371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.761021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:36:32.178 [2024-12-16 22:42:21.761059] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:32.178 [2024-12-16 22:42:21.766649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.767053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.767069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.767078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.767257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.767432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.767440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.767447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.767454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.779772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.780205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.780223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.780231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.780404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.780578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.780586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.780593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.780600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.178 [2024-12-16 22:42:21.792861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.793267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.793284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.793292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.793465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.793639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.793647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.793654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.793660] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.805885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.806327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.806344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.806352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.806526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.806699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.806707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.806715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.806722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.178 [2024-12-16 22:42:21.818895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.819257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.819277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.819286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.819459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.819632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.819640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.819647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.819653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.831811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.832270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.832278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.832451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.832624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.832633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.832639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.832645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.178 [2024-12-16 22:42:21.840820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:32.178 [2024-12-16 22:42:21.844877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.845300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.845317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.845325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.845498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.845671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.845680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.845686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.178 [2024-12-16 22:42:21.845692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.178 [2024-12-16 22:42:21.857861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.178 [2024-12-16 22:42:21.858301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.178 [2024-12-16 22:42:21.858319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.178 [2024-12-16 22:42:21.858326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.178 [2024-12-16 22:42:21.858507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.178 [2024-12-16 22:42:21.858681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.178 [2024-12-16 22:42:21.858689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.178 [2024-12-16 22:42:21.858696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.179 [2024-12-16 22:42:21.858703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.179 [2024-12-16 22:42:21.862863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.179 [2024-12-16 22:42:21.862890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.179 [2024-12-16 22:42:21.862897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.179 [2024-12-16 22:42:21.862903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.179 [2024-12-16 22:42:21.862908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
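The app_setup_trace notices above name the two supported ways to pull the trace data this run records under tracepoint group mask 0xFFFF; both commands are taken straight from the log's own hints (the copy destination is illustrative):

    # Snapshot events from the live nvmf app (shm instance 0):
    spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory buffer for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0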
00:36:32.179 [2024-12-16 22:42:21.864178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:32.179 [2024-12-16 22:42:21.864287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:32.179 [2024-12-16 22:42:21.864286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.179 [2024-12-16 22:42:21.870873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.179 [2024-12-16 22:42:21.871331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.179 [2024-12-16 22:42:21.871352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.179 [2024-12-16 22:42:21.871362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.179 [2024-12-16 22:42:21.871538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.179 [2024-12-16 22:42:21.871714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.179 [2024-12-16 22:42:21.871723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.179 [2024-12-16 22:42:21.871731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.179 [2024-12-16 22:42:21.871739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.440 [2024-12-16 22:42:21.883957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.884411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.884433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.884442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.884616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.884792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.884801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.884808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.884816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
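The three reactors starting above line up with the EAL core mask passed in earlier: -c 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, matching the app.c notice "Total cores available: 3". A quick sketch of decoding such a mask in the suite's own shell:

    # Decode an SPDK/DPDK core mask (0xE -> cores 1 2 3):
    mask=0xE
    for bit in {0..31}; do
        (( (mask >> bit) & 1 )) && echo "core $bit"
    done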
00:36:32.440 [2024-12-16 22:42:21.897038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.897493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.897513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.897522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.897697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.897872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.897881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.897889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.897896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.440 [2024-12-16 22:42:21.910116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.910577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.910597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.910606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.910782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.910959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.910967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.910974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.910982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.440 [2024-12-16 22:42:21.923209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.923661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.923681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.923691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.923865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.924040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.924049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.924057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.924064] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.440 [2024-12-16 22:42:21.936285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.936722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.936744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.936752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.936926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.937100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.937109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.937116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.937124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.440 [2024-12-16 22:42:21.949357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.949791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.949808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.949816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.949989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.950162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.950171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.950178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.950184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.440 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.440 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:32.440 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:32.440 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:32.440 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.440 [2024-12-16 22:42:21.962401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.962752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.962770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.962778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.962951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.963124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.963133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.963140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.963146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.440 [2024-12-16 22:42:21.975526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.975912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.975928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.440 [2024-12-16 22:42:21.975935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.440 [2024-12-16 22:42:21.976108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.440 [2024-12-16 22:42:21.976288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.440 [2024-12-16 22:42:21.976297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.440 [2024-12-16 22:42:21.976304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.440 [2024-12-16 22:42:21.976310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.440 [2024-12-16 22:42:21.988526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.440 [2024-12-16 22:42:21.988821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.440 [2024-12-16 22:42:21.988837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:21.988845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:21.989018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:21.989197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:21.989207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:21.989213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:21.989220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.441 [2024-12-16 22:42:21.995046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.441 22:42:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.441 [2024-12-16 22:42:22.001591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.001905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.441 [2024-12-16 22:42:22.001921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:22.001929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:22.002101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:22.002283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:22.002292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:22.002298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:22.002304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.441 [2024-12-16 22:42:22.014686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.015119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.441 [2024-12-16 22:42:22.015135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:22.015143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:22.015322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:22.015496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:22.015504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:22.015510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:22.015516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.441 [2024-12-16 22:42:22.027731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.028136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.441 [2024-12-16 22:42:22.028152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:22.028159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:22.028340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:22.028514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:22.028522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:22.028529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:22.028535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.441 Malloc0 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.441 [2024-12-16 22:42:22.040769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.041157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.441 [2024-12-16 22:42:22.041172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:22.041180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:22.041362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:22.041534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:22.041543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:22.041550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:22.041556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.441 5060.33 IOPS, 19.77 MiB/s [2024-12-16T21:42:22.142Z] [2024-12-16 22:42:22.053755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.054091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.441 [2024-12-16 22:42:22.054108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a74cf0 with addr=10.0.0.2, port=4420 00:36:32.441 [2024-12-16 22:42:22.054115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a74cf0 is same with the state(6) to be set 00:36:32.441 [2024-12-16 22:42:22.054294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a74cf0 (9): Bad file descriptor 00:36:32.441 [2024-12-16 22:42:22.054467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:32.441 [2024-12-16 22:42:22.054475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:32.441 [2024-12-16 22:42:22.054482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:32.441 [2024-12-16 22:42:22.054489] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.441 [2024-12-16 22:42:22.058463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.441 22:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 536074 00:36:32.441 [2024-12-16 22:42:22.066875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:32.441 [2024-12-16 22:42:22.095298] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
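Interleaved with the reconnect noise, the xtrace lines above show the whole target bring-up: create the TCP transport, back it with a 64 MiB, 512-byte-block malloc bdev, expose it as subsystem cnode1, and open the listener on 10.0.0.2:4420 — at which point the loop finally logs "Resetting controller successful". Issued by hand with SPDK's stock JSON-RPC client, the same sequence would look like this (parameters copied from the trace; the in-tree scripts/rpc.py path is assumed):

    # Recreate the target-side setup the script drove via rpc_cmd:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420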
00:36:34.758 5893.86 IOPS, 23.02 MiB/s [2024-12-16T21:42:25.397Z] 6592.25 IOPS, 25.75 MiB/s [2024-12-16T21:42:26.333Z] 7121.44 IOPS, 27.82 MiB/s [2024-12-16T21:42:27.291Z] 7541.40 IOPS, 29.46 MiB/s [2024-12-16T21:42:28.267Z] 7902.73 IOPS, 30.87 MiB/s [2024-12-16T21:42:29.204Z] 8183.08 IOPS, 31.97 MiB/s [2024-12-16T21:42:30.140Z] 8444.38 IOPS, 32.99 MiB/s [2024-12-16T21:42:31.077Z] 8658.50 IOPS, 33.82 MiB/s [2024-12-16T21:42:31.337Z] 8829.73 IOPS, 34.49 MiB/s 00:36:41.636 Latency(us) 00:36:41.636 [2024-12-16T21:42:31.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:41.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:41.636 Verification LBA range: start 0x0 length 0x4000 00:36:41.636 Nvme1n1 : 15.05 8809.67 34.41 10957.05 0.00 6438.40 429.10 43940.33 00:36:41.636 [2024-12-16T21:42:31.337Z] =================================================================================================================== 00:36:41.636 [2024-12-16T21:42:31.337Z] Total : 8809.67 34.41 10957.05 0.00 6438.40 429.10 43940.33 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:41.636 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:41.636 rmmod nvme_tcp 00:36:41.636 rmmod nvme_fabrics 00:36:41.636 rmmod nvme_keyring 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 536975 ']' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 536975 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 536975 ']' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 536975 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536975 
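Reading the bdevperf summary above: the columns after the job name are runtime(s), IOPS, MiB/s, Fail/s, TO/s, then Average/min/max latency in microseconds. Nvme1n1 therefore sustained 8809.67 IOPS of 4 KiB verify I/O (34.41 MiB/s) over the 15.05 s window at 6438.40 us average latency with zero timeouts; the 10957.05 failures per second are expected for this test, which repeatedly resets the controller mid-run.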
00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536975' 00:36:41.896 killing process with pid 536975 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 536975 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 536975 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.896 22:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:44.433 00:36:44.433 real 0m25.992s 00:36:44.433 user 1m0.847s 00:36:44.433 sys 0m6.674s 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:44.433 ************************************ 00:36:44.433 END TEST nvmf_bdevperf 00:36:44.433 ************************************ 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.433 ************************************ 00:36:44.433 START TEST nvmf_target_disconnect 00:36:44.433 ************************************ 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:44.433 * Looking for test storage... 
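The teardown that closed out nvmf_bdevperf above mirrors its setup: the EXIT trap runs nvmftestfini, which kills the target app (pid 536975, reactor_1), unloads nvme-tcp, nvme-fabrics and nvme-keyring, strips the test's SPDK_NVMF rules out of iptables, removes the spdk netns, and flushes the address off cvl_0_1. Condensed from the trace, the by-hand equivalent of the network cleanup is roughly:

    # Restore firewall rules minus the test's SPDK_NVMF entries,
    # then drop the IPv4 address from the second test port:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1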
00:36:44.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.433 --rc genhtml_branch_coverage=1 00:36:44.433 --rc genhtml_function_coverage=1 00:36:44.433 --rc genhtml_legend=1 00:36:44.433 --rc geninfo_all_blocks=1 00:36:44.433 --rc geninfo_unexecuted_blocks=1 00:36:44.433 00:36:44.433 ' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.433 --rc genhtml_branch_coverage=1 00:36:44.433 --rc genhtml_function_coverage=1 00:36:44.433 --rc genhtml_legend=1 00:36:44.433 --rc geninfo_all_blocks=1 00:36:44.433 --rc geninfo_unexecuted_blocks=1 00:36:44.433 00:36:44.433 ' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.433 --rc genhtml_branch_coverage=1 00:36:44.433 --rc genhtml_function_coverage=1 00:36:44.433 --rc genhtml_legend=1 00:36:44.433 --rc geninfo_all_blocks=1 00:36:44.433 --rc geninfo_unexecuted_blocks=1 00:36:44.433 00:36:44.433 ' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:44.433 --rc genhtml_branch_coverage=1 00:36:44.433 --rc genhtml_function_coverage=1 00:36:44.433 --rc genhtml_legend=1 00:36:44.433 --rc geninfo_all_blocks=1 00:36:44.433 --rc geninfo_unexecuted_blocks=1 00:36:44.433 00:36:44.433 ' 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
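The cmp_versions walk above is deciding whether the installed lcov predates 2.x: both version strings are split on separators and compared field by field; 1 < 2 already at the first field, so lt returns 0 and the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage options get exported. A compact stand-in with the same outcome (simplified, not the suite's actual helper, and it assumes lcov prints its version number as the last field):

    # Simplified version-order check via GNU sort -V; "lt A B" means A < B:
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use legacy rc flags"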
nvmf/common.sh@7 -- # uname -s 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:44.433 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:44.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:44.434 22:42:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:51.014 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:51.015 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:51.015 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:51.015 Found net devices under 0000:af:00.0: cvl_0_0 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:51.015 Found net devices under 0000:af:00.1: cvl_0_1 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:51.015 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
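The device-discovery walk just traced reduces to a small sysfs pattern: compare each PCI function's vendor/device pair against a table of supported NIC IDs, then list the net interfaces the kernel registered under /sys/bus/pci/devices/<addr>/net, which is exactly what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step above does. A minimal standalone sketch of that mechanism, with the ID table trimmed to the Intel E810 entries seen in this run (an illustration of the idea, not the test's actual common.sh code):

#!/usr/bin/env bash
# Sketch: map supported NVMe-oF-capable NICs from PCI IDs to net interface names.
# ID list abbreviated to the E810 devices this run matched (0x8086:0x1592, 0x8086:0x159b).
supported=("0x8086:0x1592" "0x8086:0x159b")
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  for id in "${supported[@]}"; do
    if [[ "$vendor:$device" == "$id" ]]; then
      echo "Found ${dev##*/} ($vendor - $device)"
      # Any interfaces the kernel registered for this function live under net/
      for net in "$dev"/net/*; do
        [[ -e "$net" ]] && echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
    fi
  done
done

On the node above, this would report the two 0000:af:00.x functions and the cvl_0_0/cvl_0_1 interfaces that the trace then moves into a network namespace and addresses.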
00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:51.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:51.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:36:51.016 00:36:51.016 --- 10.0.0.2 ping statistics --- 00:36:51.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.016 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:51.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:51.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:36:51.016 00:36:51.016 --- 10.0.0.1 ping statistics --- 00:36:51.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:51.016 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:51.016 ************************************ 00:36:51.016 START TEST nvmf_target_disconnect_tc1 00:36:51.016 ************************************ 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:51.016 22:42:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:51.016 22:42:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:51.016 [2024-12-16 22:42:40.002980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.017 [2024-12-16 22:42:40.003035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1763590 with addr=10.0.0.2, port=4420 00:36:51.017 [2024-12-16 22:42:40.003077] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:51.017 [2024-12-16 22:42:40.003093] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:51.017 [2024-12-16 22:42:40.003103] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:51.017 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:51.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:51.017 Initializing NVMe Controllers 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:51.017 00:36:51.017 real 0m0.121s 00:36:51.017 user 0m0.052s 00:36:51.017 sys 0m0.069s 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 ************************************ 00:36:51.017 END TEST nvmf_target_disconnect_tc1 00:36:51.017 ************************************ 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 ************************************ 00:36:51.017 START TEST nvmf_target_disconnect_tc2 00:36:51.017 ************************************ 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542045 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542045 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542045 ']' 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 [2024-12-16 22:42:40.141707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:51.017 [2024-12-16 22:42:40.141747] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.017 [2024-12-16 22:42:40.217140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.017 [2024-12-16 22:42:40.239268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.017 [2024-12-16 22:42:40.239309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:51.017 [2024-12-16 22:42:40.239316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.017 [2024-12-16 22:42:40.239322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.017 [2024-12-16 22:42:40.239327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:51.017 [2024-12-16 22:42:40.240701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:51.017 [2024-12-16 22:42:40.240813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:51.017 [2024-12-16 22:42:40.240900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:51.017 [2024-12-16 22:42:40.240901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 Malloc0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 [2024-12-16 22:42:40.410605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 22:42:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.018 [2024-12-16 22:42:40.439595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=542072 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:51.018 22:42:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:52.936 22:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 542045 00:36:52.936 22:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error (sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 Read completed with error 
(sct=0, sc=8) 00:36:52.936 starting I/O failed 00:36:52.936 [... identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" records for the rest of the 32-deep queue omitted ...] 00:36:52.936 [2024-12-16 22:42:42.471654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:52.936 [... the same burst of failed Read/Write completions repeats ...] 00:36:52.937 [2024-12-16 22:42:42.471856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:52.937 [... and again ...] 00:36:52.937 [2024-12-16 22:42:42.472044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:52.937 [2024-12-16 22:42:42.472146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.937 [2024-12-16 22:42:42.472169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.937 qpair failed and we were unable to recover it. 00:36:52.937 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for several dozen further reconnection attempts; identical records omitted ...]
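For orientation while reading the failure storm: the target the reconnect example is driving was assembled a few records earlier with rpc_cmd, the harness's wrapper around SPDK's scripts/rpc.py. Reproducing that bring-up by hand would look roughly like the sketch below; the relative paths and a running nvmf_tgt reachable on the default RPC socket are assumptions, while the commands and their arguments mirror the trace:

# Assumes nvmf_tgt is already up and rpc.py can reach its default RPC socket.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev, 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
./scripts/rpc.py nvmf_create_transport -t tcp -o        # TCP transport; -o toggles the C2H success optimization, as in the trace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420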
00:36:52.939 [2024-12-16 22:42:42.485331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.485446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.485567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.485690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.485810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.485914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.485934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.486147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.486168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.486339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.486362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.486529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.486555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 00:36:52.939 [2024-12-16 22:42:42.486711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.486733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.939 qpair failed and we were unable to recover it. 
00:36:52.939 [2024-12-16 22:42:42.486918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.939 [2024-12-16 22:42:42.486941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.487956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.487978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.488220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.488244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.488424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.488448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 
00:36:52.940 [2024-12-16 22:42:42.488632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.488663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.488992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.489216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.489349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.489484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.489729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.489910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.489935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.490211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.490250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.490441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.490473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.490643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.490692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 
00:36:52.940 [2024-12-16 22:42:42.490874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.490906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.491227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.491359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.491563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.491684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.491816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.491995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.492020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.492209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.492234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.492400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.492427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.492675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.492700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 
00:36:52.940 [2024-12-16 22:42:42.492828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.492852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.493943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.493967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.494133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.494157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.494469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.494501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.940 qpair failed and we were unable to recover it. 00:36:52.940 [2024-12-16 22:42:42.494616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.940 [2024-12-16 22:42:42.494647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 
00:36:52.941 [2024-12-16 22:42:42.494832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.494863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.495042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.495072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.495342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.495375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.495546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.495571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.495742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.495764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.495956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.495979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.496217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.496241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.496328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.496350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.496460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.496482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.496701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.496723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 
00:36:52.941 [2024-12-16 22:42:42.496958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.496980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.497210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.497235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.497483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.497506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.497744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.497767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.497975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.497997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.498243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.498267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.498530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.498559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.498737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.498772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.498983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.499014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.499188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.499228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 
00:36:52.941 [2024-12-16 22:42:42.499477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.499507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.499639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.499668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.499777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.499807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.500013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.500042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.500292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.500323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.500495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.500525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.500762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.500791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.500969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.501000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.501297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.501330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.501498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.501530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 
00:36:52.941 [2024-12-16 22:42:42.501755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.501786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.941 qpair failed and we were unable to recover it. 00:36:52.941 [2024-12-16 22:42:42.502003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.941 [2024-12-16 22:42:42.502034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.502221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.502254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.502482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.502512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.502634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.502665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.502894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.502925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.503235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.503268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.503372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.503401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.503539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.503571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.503764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.503794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 
00:36:52.942 [2024-12-16 22:42:42.503984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.504015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.504189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.504233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.504356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.504387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.504574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.504606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.504759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.504792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.504986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.505214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.505413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.505619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.505770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 
00:36:52.942 [2024-12-16 22:42:42.505928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.505959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.506234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.506266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.506543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.506575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.506694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.506725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.506849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.506881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.507147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.507179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.507383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.507415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.507541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.507578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.507871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.507903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.508095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.508125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 
00:36:52.942 [2024-12-16 22:42:42.508317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.508350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.508525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.508556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.508676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.508707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.508894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.508925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.509027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.509059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.509281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.509314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.509454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.509485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.509778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.510089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.510119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.942 [2024-12-16 22:42:42.510344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.510377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 
00:36:52.942 [2024-12-16 22:42:42.510582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.942 [2024-12-16 22:42:42.510612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.942 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.510755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.510788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.510998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.511029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.511273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.511305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.511439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.511471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.511670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.511702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.511905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.511936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.512172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.512218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.512370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.512401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.512588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.512619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-16 22:42:42.512788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.512819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.512951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.512983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.513176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.513220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.513335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.513365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.513504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.513536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.513751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.513782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.513903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.513934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.514075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.514233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.514371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-16 22:42:42.514571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.514726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.514960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.514992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.515165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.515207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.515347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.515379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.515553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.515585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.515688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.515718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.515827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.515864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.516103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.516134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.516344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.516377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.943 [2024-12-16 22:42:42.516546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.516577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.516747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.516778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.516951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.516983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.517224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.517257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.517445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.517476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.517672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.517703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.517809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.517841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.517958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.517990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.518250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.518283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 00:36:52.943 [2024-12-16 22:42:42.518496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.943 [2024-12-16 22:42:42.518528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.943 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-16 22:42:42.518667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.518698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.518812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.518844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.519135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.519167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.519298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.519330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.519437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.519469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.519603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.519634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.519880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.519911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.520032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.520236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.520439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-16 22:42:42.520588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.520761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.520912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.520943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.521125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.521156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.521298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.521332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.521591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.521622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.521855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.521887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.522073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.522105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.522291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.522325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.522520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.522550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-16 22:42:42.522789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.522820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.523024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.523055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.523226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.523258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.523437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.523469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.523757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.523788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.524082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.524113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.524325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.524358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.524466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.524502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.524708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.524739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.524996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 
00:36:52.944 [2024-12-16 22:42:42.525217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.525386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.525613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.525833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.525968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.525998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.526239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.526271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.526390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.526421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.526611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.526643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.526821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.944 [2024-12-16 22:42:42.526852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.944 qpair failed and we were unable to recover it. 00:36:52.944 [2024-12-16 22:42:42.527035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.527066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-16 22:42:42.527362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.527395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.527570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.527601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.527772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.527803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.528048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.528078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.528322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.528360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.528556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.528587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.528830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.528861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.528979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.529009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.529253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.529287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.529422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.529457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-16 22:42:42.529632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.529662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.529934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.529965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.530148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.530180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.530366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.530417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.530745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.530819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.531024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.531060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.531360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.531396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.531595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.531627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.531809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.531961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.531992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-16 22:42:42.532113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.532144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.532260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.532293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.532491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.532523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.532645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.532677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.532803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.532835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.533075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.533106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.533378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.533412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.533606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.533638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.533942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.533985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.534177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.534220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 
00:36:52.945 [2024-12-16 22:42:42.534472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.534505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.534636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.534667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.945 [2024-12-16 22:42:42.534907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.945 [2024-12-16 22:42:42.534939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.945 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.535053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.535085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.535286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.535319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.535521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.535553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.535840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.535872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.536112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.536145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.536344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.536377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.536646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.536678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-16 22:42:42.536881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.536912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.537107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.537145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.537352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.537385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.537571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.537603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.537723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.537754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.538017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.538048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.538251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.538285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.538456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.538487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.538664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.538696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.539011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.539042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-16 22:42:42.539234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.539267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.539508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.539539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.539726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.539757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.540044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.540075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.540259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.540291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.540510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.540542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.540674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.540706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.541012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.541228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.541450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-16 22:42:42.541653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.541815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.541949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.541981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.542223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.542257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.542384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.542416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.542544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.542576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.542840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.542873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.543096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.543127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.543378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.543417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.543559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.543591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 
00:36:52.946 [2024-12-16 22:42:42.543826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.543857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.544115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.946 [2024-12-16 22:42:42.544147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.946 qpair failed and we were unable to recover it. 00:36:52.946 [2024-12-16 22:42:42.544303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.544337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.544452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.544483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.544653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.544684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.544890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.544922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.545189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.545230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.545367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.545398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.545639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.545670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.545789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.545821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 
00:36:52.947 [2024-12-16 22:42:42.546080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.546111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.546280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.546315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.546518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.546549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.546755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.546787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.547093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.547124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.547376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.547409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.547601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.547632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.547776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.547807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.547979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.548010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.548280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.548313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 
00:36:52.947 [2024-12-16 22:42:42.548448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.548479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.548670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.548702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.549850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.549881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.550149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.550180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 00:36:52.947 [2024-12-16 22:42:42.550337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.947 [2024-12-16 22:42:42.550370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:52.947 qpair failed and we were unable to recover it. 
00:36:52.947 [2024-12-16 22:42:42.550610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.947 [2024-12-16 22:42:42.550641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:52.947 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x24ae6a0 repeats 35 more times in this excerpt, timestamps 2024-12-16 22:42:42.550811 through 22:42:42.557845 ...]
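For readers decoding the run above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting connections at 10.0.0.2:4420 (4420 is the standard NVMe/TCP port), so each reconnect attempt by the host driver fails immediately. A minimal standalone sketch that reproduces the same errno (plain POSIX sockets, not SPDK code; assumes nothing is listening on the chosen port):

    /* Reproduces the "connect() failed, errno = 111" pattern above.
     * Assumes no listener on 127.0.0.1:4420; plain POSIX, not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* standard NVMe/TCP port */
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }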
00:36:52.948 Read completed with error (sct=0, sc=8)
00:36:52.948 starting I/O failed
[... 32 outstanding I/Os in this excerpt (19 reads, 13 writes) complete with (sct=0, sc=8) and each fails to restart ...]
00:36:52.948 [2024-12-16 22:42:42.558501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.948 [2024-12-16 22:42:42.558876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.948 [2024-12-16 22:42:42.558932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:52.948 qpair failed and we were unable to recover it.
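On the (sct=0, sc=8) completions: sct 0 is the NVMe "Generic Command Status" type, and, per my reading of the NVMe base specification, status code 0x8 in that type is "Command Aborted due to SQ Deletion". That matches the surrounding lines: the CQ transport error (-6 is -ENXIO, "No such device or address", as the message itself spells out) tears down qpair 1, the outstanding commands complete with the abort status, and none can be restarted. A small illustrative decoder for such status pairs (tables abridged from the spec; these names are not SPDK identifiers):

    /* Illustrative decoder for the (sct, sc) pairs above. Tables are
     * abridged from the NVMe base spec, not taken from SPDK headers. */
    #include <stdio.h>

    static const char *sct_name(unsigned sct)
    {
        switch (sct) {
        case 0x0: return "Generic Command Status";
        case 0x1: return "Command Specific Status";
        case 0x2: return "Media and Data Integrity Errors";
        case 0x3: return "Path Related Status";
        default:  return "Vendor Specific / Reserved";
        }
    }

    static const char *generic_sc_name(unsigned sc)
    {
        switch (sc) {
        case 0x0: return "Successful Completion";
        case 0x4: return "Data Transfer Error";
        case 0x6: return "Internal Error";
        case 0x7: return "Command Abort Requested";
        case 0x8: return "Command Aborted due to SQ Deletion";
        default:  return "Other";
        }
    }

    int main(void)
    {
        unsigned sct = 0x0, sc = 0x8;   /* the pair seen in the log above */
        printf("sct=%#x (%s), sc=%#x (%s)\n",
               sct, sct_name(sct), sc, generic_sc_name(sc));
        return 0;
    }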
00:36:52.948 [2024-12-16 22:42:42.559129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.948 [2024-12-16 22:42:42.559162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:52.948 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x7fe1a0000b90 repeats 158 more times in this excerpt, timestamps 2024-12-16 22:42:42.559382 through 22:42:42.599681, roughly 0.25 ms apart ...]
00:36:52.953 [2024-12-16 22:42:42.599943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.953 [2024-12-16 22:42:42.599976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:52.953 qpair failed and we were unable to recover it.
00:36:52.953 [2024-12-16 22:42:42.600277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.600310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.600524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.600556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.600701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.600732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.600928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.600959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.601139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.601171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.601404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.601437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.601719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.601751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.602032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.602064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.602290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.602323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.602599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.602630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-16 22:42:42.602850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.602883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.603162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.603204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.603406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.603439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.603665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.603697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.603994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.604026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.604250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.604284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.604488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.604520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.604795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.604827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.604945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.604977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.605187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.605232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-16 22:42:42.605540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.605572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.605772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.605803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.606059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.606092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.606208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.606242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.606531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.606563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.606828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.606859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.607118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.607149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.607390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.607423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.607646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.607678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.607946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.607977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 
00:36:52.953 [2024-12-16 22:42:42.608158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.608190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.608516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.608548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.608820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.608857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.609069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.609100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.609352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.953 [2024-12-16 22:42:42.609386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.953 qpair failed and we were unable to recover it. 00:36:52.953 [2024-12-16 22:42:42.609565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.609596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.609894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.609925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.610124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.610156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.610441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.610474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.610665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.610697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-16 22:42:42.610956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.610988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.611244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.611277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.611580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.611612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.611895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.611927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.612120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.612152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.612371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.612404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.612590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.612622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.612878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.612910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.613112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.613144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.613335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.613370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-16 22:42:42.613573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.613604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.613794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.613826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.614099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.614130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.614336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.614370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.614669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.614701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.614880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.614911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.615166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.615209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.615340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.615373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.615646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.615677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.615965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.615998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.954 [2024-12-16 22:42:42.616279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.616313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.616590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.616623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.616808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.616840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.617051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.617083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.617364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.617397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.617582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.617614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.617863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.617895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.618203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.618235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.618435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.618467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 00:36:52.954 [2024-12-16 22:42:42.618646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.954 [2024-12-16 22:42:42.618677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.954 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-16 22:42:42.618881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.618912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.619185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.619226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.619451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.619488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.619615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.619646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.619926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.619958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.620188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.620241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.620442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.620474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.620743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.620775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.621067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.621099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.621374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.621408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-16 22:42:42.621680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.621711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.622003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.622036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.622151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.622182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.622397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.622430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.622760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.622791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.623083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.623115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.623395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.623428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.623712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.623742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.624025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.624056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.624343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.624377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:52.955 [2024-12-16 22:42:42.624624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.624655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.624918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.624950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.625160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.625220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.625447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.625480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.625698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.625729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.625951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.625984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.626216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.626249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.626529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.626561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.626833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.626865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 00:36:52.955 [2024-12-16 22:42:42.627159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.955 [2024-12-16 22:42:42.627200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:52.955 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-16 22:42:42.627466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.627497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.627765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.627797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.627929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.627960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.628145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.628177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.628502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.628535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.628715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.628747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.628998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.629029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.629251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.629284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.629465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.629496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.629674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.629707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-16 22:42:42.629882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.629913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.630111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.630142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.630425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.630465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.630723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.630755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.631002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.631033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.631289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.631322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.631528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.631559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.631778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.631810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.632003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.632035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.632306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.632339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 
00:36:53.313 [2024-12-16 22:42:42.632587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.632619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.632823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.632855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.633029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.633061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.313 qpair failed and we were unable to recover it. 00:36:53.313 [2024-12-16 22:42:42.633329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.313 [2024-12-16 22:42:42.633363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.633555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.633586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.633787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.633819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.634097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.634128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.634388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.634421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.634676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.634707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 00:36:53.314 [2024-12-16 22:42:42.634985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.314 [2024-12-16 22:42:42.635017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.314 qpair failed and we were unable to recover it. 
00:36:53.314 [2024-12-16 22:42:42.635214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.314 [2024-12-16 22:42:42.635246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:53.314 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure triplet repeats for tqpair=0x7fe1a0000b90 from 2024-12-16 22:42:42.635425 through 22:42:42.647956]
00:36:53.315 [2024-12-16 22:42:42.648371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.315 [2024-12-16 22:42:42.648464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:53.315 qpair failed and we were unable to recover it.
[the same triplet repeats for tqpair=0x7fe198000b90 from 2024-12-16 22:42:42.648693 through 22:42:42.689877]
00:36:53.320 [2024-12-16 22:42:42.690056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.690089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.690366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.690400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.690680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.690713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.690997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.691030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.691214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.691248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.691431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.691464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.691679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.691712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.691988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.692021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.692300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.692334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.692589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.692622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 
00:36:53.320 [2024-12-16 22:42:42.692873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.692906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.693128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.693160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.693371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.693405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.693612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.693644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.693843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.693875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.694051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.320 [2024-12-16 22:42:42.694084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.320 qpair failed and we were unable to recover it. 00:36:53.320 [2024-12-16 22:42:42.694263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.694297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.694585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.694616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.694913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.694945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.695217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.695252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 
00:36:53.321 [2024-12-16 22:42:42.695542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.695574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.695844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.695876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.696057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.696090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.696218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.696252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.696479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.696516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.696794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.696827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.697054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.697086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.697364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.697398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.697602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.697635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.697762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.697794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 
00:36:53.321 [2024-12-16 22:42:42.698067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.698099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.698283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.698318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.698588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.698620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.698896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.698928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.699222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.699256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.699529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.699561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.699850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.699882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.700128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.700159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.700369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.700403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.700677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.700715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 
00:36:53.321 [2024-12-16 22:42:42.700916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.700948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.701215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.701249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.701436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.701468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.701663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.701695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.701976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.702008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.702282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.702316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.702601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.702633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.702910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.702941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.703136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.703167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.703454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.703487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 
00:36:53.321 [2024-12-16 22:42:42.703767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.703799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.321 [2024-12-16 22:42:42.703993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.321 [2024-12-16 22:42:42.704024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.321 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.704274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.704308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.704444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.704477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.704750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.704781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.705075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.705106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.705381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.705416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.705703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.705734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.705989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.706021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.706300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.706335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 
00:36:53.322 [2024-12-16 22:42:42.706611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.706643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.706911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.706943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.707213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.707247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.707571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.707602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.707877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.707909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.708211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.708246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.708509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.708542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.708830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.708862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.709061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.709093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.709367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.709402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 
00:36:53.322 [2024-12-16 22:42:42.709630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.709663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.709862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.709894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.710104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.710135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.710359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.710394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.710522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.710554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.710871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.710903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.711111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.711144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.711371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.711405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.711680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.711713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.711917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.711961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 
00:36:53.322 [2024-12-16 22:42:42.712187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.712229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.712419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.712451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.712671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.712704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.713005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.713037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.713239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.713273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.713486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.713519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.322 [2024-12-16 22:42:42.713650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.322 [2024-12-16 22:42:42.713683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.322 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.713883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.713915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.714245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.714279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.714570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.714603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 
00:36:53.323 [2024-12-16 22:42:42.714879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.714911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.715172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.715213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.715467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.715499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.715696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.715728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.715933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.715965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.716164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.716204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.716506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.716538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.716796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.716828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.717127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.717159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.717353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.717388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 
00:36:53.323 [2024-12-16 22:42:42.717679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.717711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.717945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.717977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.718189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.718263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.718464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.718497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.718695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.718728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.718983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.719014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.719271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.719307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.719606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.719639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.719820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.719851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.720119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.720150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 
00:36:53.323 [2024-12-16 22:42:42.720374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.720407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.720611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.720643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.720867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.720900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.721185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.721226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.721407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.721439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.323 [2024-12-16 22:42:42.721636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.323 [2024-12-16 22:42:42.721668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.323 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.721958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.721990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.722243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.722278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.722486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.722519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.722792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.722831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 
00:36:53.324 [2024-12-16 22:42:42.723027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.723061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.723312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.723347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.723663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.723695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.723900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.723934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.724065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.724098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.724365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.724399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.724598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.724630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.724765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.724797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.724928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.724961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 00:36:53.324 [2024-12-16 22:42:42.725247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.324 [2024-12-16 22:42:42.725281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.324 qpair failed and we were unable to recover it. 
00:36:53.324 [2024-12-16 22:42:42.725493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.324 [2024-12-16 22:42:42.725527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:53.324 qpair failed and we were unable to recover it.
[... the three messages above repeat, differing only in timestamp, for roughly 210 occurrences in total between 22:42:42.725493 and 22:42:42.776052; every attempt targets the same tqpair 0x7fe198000b90 at 10.0.0.2:4420 and fails with errno = 111 ...]
00:36:53.330 [2024-12-16 22:42:42.776343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.776379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.776511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.776545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.776724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.776756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.776950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.776983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.777173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.777218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.777368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.777404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.777606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.777640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.777860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.777893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.778072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.778105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.778380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.778414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 
00:36:53.330 [2024-12-16 22:42:42.778738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.778777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.779037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.779071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.779300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.779334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.779542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.330 [2024-12-16 22:42:42.779576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.330 qpair failed and we were unable to recover it. 00:36:53.330 [2024-12-16 22:42:42.779759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.779791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.780061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.780094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.780326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.780361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.780546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.780578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.780707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.780742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.781031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.781069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 
00:36:53.331 [2024-12-16 22:42:42.781208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.781243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.781500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.781534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.781662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.781693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.781890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.781923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.782209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.782243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.782447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.782479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.782660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.782694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.782881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.782913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.783140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.783368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 
00:36:53.331 [2024-12-16 22:42:42.783560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.783592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.783798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.783830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.784030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.784065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.784216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.784250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.784457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.784490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.784668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.784704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.784896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.784929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.785114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.785147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.785364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.785398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.785523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.785555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 
00:36:53.331 [2024-12-16 22:42:42.785833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.785865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.786063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.786094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.786298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.786333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.786557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.786589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.786811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.786844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.786981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.787014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.787226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.787261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.787480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.787513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.787650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.331 [2024-12-16 22:42:42.787682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.331 qpair failed and we were unable to recover it. 00:36:53.331 [2024-12-16 22:42:42.787915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.787948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 
00:36:53.332 [2024-12-16 22:42:42.788262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.788296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.788477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.788509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.788728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.788946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.788977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.789257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.789291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.789494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.789527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.789735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.789768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.789997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.790030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.790303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.790338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.790466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.790503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 
00:36:53.332 [2024-12-16 22:42:42.790634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.790668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.790780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.790812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.791027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.791059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.791243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.791277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.791406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.791440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.791564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.791597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.791935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.791968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.792235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.792270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.792481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.792514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.792626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.792659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 
00:36:53.332 [2024-12-16 22:42:42.792871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.792902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.793113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.793144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.793349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.793383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.793592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.793627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.793906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.793940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.794134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.794166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.794365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.794399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.794606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.794640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.794839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.794872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.794989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.795022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 
00:36:53.332 [2024-12-16 22:42:42.795247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.795282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.795486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.795521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.795703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.795739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.332 qpair failed and we were unable to recover it. 00:36:53.332 [2024-12-16 22:42:42.795919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.332 [2024-12-16 22:42:42.795953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.796140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.796173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.796459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.796495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.796704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.796738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.796867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.796900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.797087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.797120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.797331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.797366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 
00:36:53.333 [2024-12-16 22:42:42.797570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.797602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.797803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.797836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.798155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.798188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.798491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.798524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.798722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.798754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.799027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.799061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.799266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.799301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.799600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.799634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.799764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.799796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.799976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.800015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 
00:36:53.333 [2024-12-16 22:42:42.800298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.800355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.800497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.800530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.800831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.800864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.801062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.801101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.801373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.801408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.801537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.801571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.801748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.801781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.802034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.802065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.802269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.802306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.802502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.802535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 
00:36:53.333 [2024-12-16 22:42:42.802736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.802769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.803019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.803052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.333 qpair failed and we were unable to recover it. 00:36:53.333 [2024-12-16 22:42:42.803322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.333 [2024-12-16 22:42:42.803357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.803699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.803732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.803911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.803944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.804147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.804185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.804346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.804380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.804581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.804613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.804746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.804779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.804956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.804988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 
00:36:53.334 [2024-12-16 22:42:42.805177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.805232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.805359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.805400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.805592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.805624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.805893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.805929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.806113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.806150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.806369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.806406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.806657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.806696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.807019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.807053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.807256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.807294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 00:36:53.334 [2024-12-16 22:42:42.807535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.334 [2024-12-16 22:42:42.807569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.334 qpair failed and we were unable to recover it. 
00:36:53.334 [2024-12-16 22:42:42.807767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.334 [2024-12-16 22:42:42.807803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:53.334 qpair failed and we were unable to recover it.
00:36:53.334 [... same connect()/qpair-failure triplet repeated verbatim for tqpair=0x7fe198000b90, addr=10.0.0.2, port=4420; ~210 occurrences between 22:42:42.807 and 22:42:42.850 elided ...]
00:36:53.340 [2024-12-16 22:42:42.850567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.340 [2024-12-16 22:42:42.850601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:53.340 qpair failed and we were unable to recover it.
00:36:53.340 [2024-12-16 22:42:42.850791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.850823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.340 [2024-12-16 22:42:42.850950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.850983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.340 [2024-12-16 22:42:42.851177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.851226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.340 [2024-12-16 22:42:42.851338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.851376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.340 [2024-12-16 22:42:42.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.851598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.340 [2024-12-16 22:42:42.851780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.340 [2024-12-16 22:42:42.851812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.340 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.851990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.852022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.852208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.852243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.852363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.852397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.852645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.852677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 
00:36:53.341 [2024-12-16 22:42:42.852784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.852816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.852995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.853147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.853378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.853593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.853733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.853954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.853986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.854090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.854122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.854297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.854333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.854462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.854495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 
00:36:53.341 [2024-12-16 22:42:42.854616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.854649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.854766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.854798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.854973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.855115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.855341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.855627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.855764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.855918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.855950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.856149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.856181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.856326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.856359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 
00:36:53.341 [2024-12-16 22:42:42.856491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.856523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.856647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.856680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.856866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.856898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.857050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.857215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.857365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.857574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.857782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.857974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.858005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 00:36:53.341 [2024-12-16 22:42:42.858113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.341 [2024-12-16 22:42:42.858145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.341 qpair failed and we were unable to recover it. 
00:36:53.342 [2024-12-16 22:42:42.858384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.858419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.858527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.858559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.858699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.858805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.858837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.858944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.858976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.859078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.859109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.859301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.859335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.859510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.859541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.859720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.859751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.859864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.859896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 
00:36:53.342 [2024-12-16 22:42:42.860001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.860033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.860212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.860245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.860548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.860580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.860703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.860735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.860840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.860872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.861047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.861079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.861351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.861385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.861565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.861596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.861794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.861825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.861951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.861984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 
00:36:53.342 [2024-12-16 22:42:42.862156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.862187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.862304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.862336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.862535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.862568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.862687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.862719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.862894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.862926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.863038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.863071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.863260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.863300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.863442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.863475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.863669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.863701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.863876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.863908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 
00:36:53.342 [2024-12-16 22:42:42.864083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.864116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.864253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.864287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.864394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.864428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.342 [2024-12-16 22:42:42.864673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.342 [2024-12-16 22:42:42.864706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.342 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.864878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.864909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.865078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.865111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.865235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.865270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.865385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.865416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.865532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.865564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.865831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.865864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 
00:36:53.343 [2024-12-16 22:42:42.866081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.866114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.866288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.866322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.866425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.866457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.866645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.866677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.866784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.866815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.866984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.867035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.867230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.867265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.867527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.867558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.867732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.867765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.867888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.867919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 
00:36:53.343 [2024-12-16 22:42:42.868026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.868167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.868384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.868607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.868753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.868901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.868933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.869048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.869080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.869213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.869246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.869537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.869570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 00:36:53.343 [2024-12-16 22:42:42.869682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.343 [2024-12-16 22:42:42.869714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.343 qpair failed and we were unable to recover it. 
00:36:53.343 [2024-12-16 22:42:42.869884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.869916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.870053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.870222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.870504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.870639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.870794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.870970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.871130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.871353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.871566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 
00:36:53.344 [2024-12-16 22:42:42.871719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.871862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.871893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.872881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.872914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.873204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.873237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.873413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.873444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 
00:36:53.344 [2024-12-16 22:42:42.873623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.873655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.873834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.873865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.874115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.874270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.874472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.874702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.874858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.874973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.875005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.875252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.875285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.875479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.875511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 
00:36:53.344 [2024-12-16 22:42:42.875764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.875796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.875923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.875955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.876070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.876101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.344 qpair failed and we were unable to recover it. 00:36:53.344 [2024-12-16 22:42:42.876277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.344 [2024-12-16 22:42:42.876311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.876580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.876612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.876814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.876845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.877014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.877045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.877237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.877271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.877444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.877476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 00:36:53.345 [2024-12-16 22:42:42.877593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.345 [2024-12-16 22:42:42.877625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.345 qpair failed and we were unable to recover it. 
00:36:53.350 [2024-12-16 22:42:42.918066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.350 [2024-12-16 22:42:42.918097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.350 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.918356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.918390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.918631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.918663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.918777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.918808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.918977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.919184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.919333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.919488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.919700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.919921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.919953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.920120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.920152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.920430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.920464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.920631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.920663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.920775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.920806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.921956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.921988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.922178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.922220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.922348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.922381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.922566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.922597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.922796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.922828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.922995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.923146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.923291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.923493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.923698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.923867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.923899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.924068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.924098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.924380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.924413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.924590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.924622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.924738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.924774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.924941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.924972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.925075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.925107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.925279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.925313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.925430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.925462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.925633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.925664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.925786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.925817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.926016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.926230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.926378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.926509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.926729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.926883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.926915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.927169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.927209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.927388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.927420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.927587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.927619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.927789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.927821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.927991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.928128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.928363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.928563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.928764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.928905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.928936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.929105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.929136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.929253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.929287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.929456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.929486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.929655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.929687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 
00:36:53.351 [2024-12-16 22:42:42.929867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.929899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.930136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.930167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.930287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.930319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.930420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.930452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.930638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.930669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.930858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.930890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.931073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.931104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.931231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.931264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.931380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.351 [2024-12-16 22:42:42.931411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.351 qpair failed and we were unable to recover it. 00:36:53.351 [2024-12-16 22:42:42.931588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.931620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.931880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.931911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.932026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.932058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.932251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.932284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.932474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.932510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.932748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.932779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.932970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.933002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.933114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.933145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.933342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.933375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.933615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.933646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.933760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.933791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.933981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.934012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.934278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.934312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.934434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.934465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.934629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.934661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.934826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.934857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.935044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.935076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.935176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.935218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.935789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.935821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.936063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.936095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.936374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.936408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.936576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.936607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.936818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.936850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.937074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.937233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.937379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.937614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.937833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.937979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.938011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.938247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.938280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.938470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.938501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.938609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.938641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.938823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.938854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.939046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.939077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.939319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.939351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.939526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.939557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.939668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.939699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.939891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.939922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.940159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.940210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.940330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.940362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.940553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.940584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.940846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.940877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.941042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.941239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.941273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.941488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.941525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.941642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.941672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.941791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.941822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.942029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.942060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.942298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.942331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.942538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.942570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.942741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.942773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 
00:36:53.352 [2024-12-16 22:42:42.942948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.942979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.943260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.943294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.943418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.943450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.943645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.943677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.943869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.943900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.944200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.944342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.944371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.944568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.944599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.944702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.352 [2024-12-16 22:42:42.944732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.352 qpair failed and we were unable to recover it. 00:36:53.352 [2024-12-16 22:42:42.944837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.353 [2024-12-16 22:42:42.944867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.353 qpair failed and we were unable to recover it. 
00:36:53.353 [2024-12-16 22:42:42.944985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.353 [2024-12-16 22:42:42.945016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:53.353 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt from 22:42:42.944985 through 22:42:42.985929, always against tqpair=0x7fe198000b90, addr=10.0.0.2, port=4420, errno = 111 (ECONNREFUSED on Linux) ...]
00:36:53.356 [2024-12-16 22:42:42.986107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.986139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.986253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.986287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.986527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.986560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.986678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.986713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.986831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.986866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.987044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.987262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.987488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.987645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.987806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 
00:36:53.356 [2024-12-16 22:42:42.987947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.987979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.988149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.988189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.988449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.988488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.988597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.988639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.988754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.988793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.988917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.988949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.989059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.989092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.989221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.989254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.989360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.989392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.989652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.989688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 
00:36:53.356 [2024-12-16 22:42:42.989880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.989914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.990036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.990067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.990247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.990282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.356 [2024-12-16 22:42:42.990452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.356 [2024-12-16 22:42:42.990486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.356 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.990698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.990734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.990905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.990937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.991231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.991266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.991538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.991691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.991732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.991930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.991962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 
00:36:53.641 [2024-12-16 22:42:42.992141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.992174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.992310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.992342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.992510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.992543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.992725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.992760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.992940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.641 [2024-12-16 22:42:42.992971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.641 qpair failed and we were unable to recover it. 00:36:53.641 [2024-12-16 22:42:42.993160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.642 [2024-12-16 22:42:42.993204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.642 qpair failed and we were unable to recover it. 00:36:53.642 [2024-12-16 22:42:42.993324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.642 [2024-12-16 22:42:42.993357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.642 qpair failed and we were unable to recover it. 00:36:53.642 [2024-12-16 22:42:42.993571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.642 [2024-12-16 22:42:42.993603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.642 qpair failed and we were unable to recover it. 00:36:53.642 [2024-12-16 22:42:42.993775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.642 [2024-12-16 22:42:42.993807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.642 qpair failed and we were unable to recover it. 00:36:53.642 [2024-12-16 22:42:42.993976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.642 [2024-12-16 22:42:42.994008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.642 qpair failed and we were unable to recover it. 
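errno 111 is ECONNREFUSED: the target host answered, but nothing was listening on the NVMe/TCP port, so every connect() from the host-side socket layer (posix_sock_create) fails immediately. A minimal sketch of how that errno arises is below; this is not SPDK code, and the address 10.0.0.2 and port 4420 are simply taken from the log lines above.

```c
/* Minimal sketch (not SPDK code): reproduce errno 111 (ECONNREFUSED),
 * the error posix_sock_create reports above, by connect()ing to a
 * reachable host with no listener on the target port. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* If the host is up but no listener is bound to the port,
         * this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```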
00:36:53.642 [2024-12-16 22:42:42.994231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.642 [2024-12-16 22:42:42.994303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:53.642 qpair failed and we were unable to recover it.
[... the same failure triplet repeats for tqpair=0x24ae6a0 about 130 times in total between 22:42:42.994 and 22:42:43.018; only the timestamps vary, so the remainder is elided ...]
00:36:53.645 [2024-12-16 22:42:43.018679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.018710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.018812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.018844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.019039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.019070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.019202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.019235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.019341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.019373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.019616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.019648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.019852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.019883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.020076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.020282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.020427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 
00:36:53.645 [2024-12-16 22:42:43.020625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.020756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.020958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.020989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.021108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.021139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.021248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.021281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.021545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.021576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.021687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.021719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.021832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.021863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.022058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.022089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.022244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.022283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 
00:36:53.645 [2024-12-16 22:42:43.022491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.022523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.022790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.022822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.023057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.023088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.023266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.023299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.645 [2024-12-16 22:42:43.023416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.645 [2024-12-16 22:42:43.023449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.645 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.023554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.023585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.023819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.023850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.024027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.024226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.024377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 
00:36:53.646 [2024-12-16 22:42:43.024515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.024655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.024875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.024906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.025070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.025102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.025353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.025387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.025493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.025524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.025747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.025779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.025963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.025995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.026255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.026287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.026396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.026429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 
00:36:53.646 [2024-12-16 22:42:43.026621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.026652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.026768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.026799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.026987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.027020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.027188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.027230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.027327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.027359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.027526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.027557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.027751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.027789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.027981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.028176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.028323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 
00:36:53.646 [2024-12-16 22:42:43.028543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.028699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.028857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.028890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.029090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.029230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.029450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.029678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.029879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.029984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.030150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 
00:36:53.646 [2024-12-16 22:42:43.030338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.030490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.030642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.030847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.030878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.030996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.031028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.031147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.646 [2024-12-16 22:42:43.031178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.646 qpair failed and we were unable to recover it. 00:36:53.646 [2024-12-16 22:42:43.031354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.031386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.031493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.031524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.031689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.031720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.031817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.031849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 
00:36:53.647 [2024-12-16 22:42:43.032022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.032053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.032161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.032202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.032415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.032589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.032620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.032860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.032892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.033082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.033114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.033219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.033252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.033426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.033458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.033653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.033684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 00:36:53.647 [2024-12-16 22:42:43.033874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.647 [2024-12-16 22:42:43.033906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.647 qpair failed and we were unable to recover it. 
00:36:53.647 [2024-12-16 22:42:43.035422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24bc5e0 is same with the state(6) to be set
00:36:53.647 [2024-12-16 22:42:43.035695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.647 [2024-12-16 22:42:43.035765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:53.647 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / sock connection error / qpair failed sequence then repeats for tqpair=0x7fe194000b90, identical except for timestamps, from 22:42:43.036 through 22:42:43.050; duplicate entries omitted]
00:36:53.649 [2024-12-16 22:42:43.050586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.050618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.050783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.050813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.050997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.051030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.051207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.051239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.051424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.051456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.051624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.051655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.051756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.051786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.051968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.052097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.052305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 
00:36:53.649 [2024-12-16 22:42:43.052508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.052719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.052867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.052900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.053958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.053990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 00:36:53.649 [2024-12-16 22:42:43.054229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.649 [2024-12-16 22:42:43.054262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.649 qpair failed and we were unable to recover it. 
00:36:53.649 [2024-12-16 22:42:43.054377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.054408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.054588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.054620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.054804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.054836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.055938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.055971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.056151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.056182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 
00:36:53.650 [2024-12-16 22:42:43.056304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.056337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.056503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.056535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.056704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.056736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.056917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.056949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.057957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.057989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 
00:36:53.650 [2024-12-16 22:42:43.058100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.058261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.058477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.058612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.058831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.058965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.058997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.059140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.059344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.059377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.059488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.059520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.059686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.059718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 
00:36:53.650 [2024-12-16 22:42:43.059911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.059942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.060050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.060083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.060205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.060239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.060340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.060372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.060475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.650 [2024-12-16 22:42:43.060506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.650 qpair failed and we were unable to recover it. 00:36:53.650 [2024-12-16 22:42:43.060617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.060649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.060835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.060867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.061001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.061234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.061382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 
00:36:53.651 [2024-12-16 22:42:43.061529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.061676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.061890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.061922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.062940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.062971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.063081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 
00:36:53.651 [2024-12-16 22:42:43.063221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.063427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.063635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.063777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.063917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.063948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.064161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.064201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.064312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.064343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.064511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.064544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.064654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.064685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.064893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.064925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 
00:36:53.651 [2024-12-16 22:42:43.065115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.065147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.065258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.065291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.065457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.065489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.065657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.065689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.065798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.065830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.066018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.066050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.066430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.066462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.066645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.066676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.066794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.066826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 
00:36:53.651 [2024-12-16 22:42:43.066998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.067031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.067289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.067363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.067561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.067631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.067817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.067854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.068035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.068067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.651 [2024-12-16 22:42:43.068245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.651 [2024-12-16 22:42:43.068280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.651 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.068498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.068530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.068699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.068731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.068839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.068869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.069041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.069073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 
00:36:53.652 [2024-12-16 22:42:43.069289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.069324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.069519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.069551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.069674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.069705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.069874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.069906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.070041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.070257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.070456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.070657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.070870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.070986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.071019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 
00:36:53.652 [2024-12-16 22:42:43.071185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.071229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.071401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.071432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.071530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.071561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.071798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.071830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.072049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.072081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.072204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.072239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.072475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.072507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.072672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.072703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.072883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.072920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.073035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 
00:36:53.652 [2024-12-16 22:42:43.073235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.073386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.073519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.073720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.073926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.073957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.074129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.074161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.074299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.074332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.074504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.074535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.074749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.074781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 00:36:53.652 [2024-12-16 22:42:43.075036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.652 [2024-12-16 22:42:43.075067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.652 qpair failed and we were unable to recover it. 
00:36:53.652 [2024-12-16 22:42:43.075234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.652 [2024-12-16 22:42:43.075268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:53.652 qpair failed and we were unable to recover it.
00:36:53.652-00:36:53.658 [... the same three-line error repeats back to back roughly 200 more times (timestamps 2024-12-16 22:42:43.075435 through 22:42:43.115366): every connect() attempt for tqpair=0x7fe194000b90 to addr=10.0.0.2, port=4420 fails with errno = 111, and each time the qpair fails and cannot be recovered ...]
00:36:53.658 [2024-12-16 22:42:43.115486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.115518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.115703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.115735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.115905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.115937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.116085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.116236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.116469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.116695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.116847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.116997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.117030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.117150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.117182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 
00:36:53.658 [2024-12-16 22:42:43.117423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.117455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.117736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.117767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.117935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.117967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.118134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.118166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.118305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.118338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.118448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.118480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.118628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.118659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.118853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.118884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.119063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.119100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.119367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.119400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 
00:36:53.658 [2024-12-16 22:42:43.119517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.119549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.119754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.119786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.119974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.120005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.120175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.120215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.120384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.120416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.120650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.120681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.120792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.120823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.120994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.121025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.121212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.121245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 00:36:53.658 [2024-12-16 22:42:43.121409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.658 [2024-12-16 22:42:43.121441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.658 qpair failed and we were unable to recover it. 
00:36:53.658 [2024-12-16 22:42:43.121610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.121642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.121820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.121851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.122042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.122074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.122246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.122279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.122448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.122480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.122713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.122745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.122913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.122944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.123177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.123219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.123326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.123359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.123597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.123630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 
00:36:53.659 [2024-12-16 22:42:43.123813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.123845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.123950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.123982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.124172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.124215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.124383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.124415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.124584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.124616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.124742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.124773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.124946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.124978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.125087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.125118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.125244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.125278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.125452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.125484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 
00:36:53.659 [2024-12-16 22:42:43.125652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.125685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.125800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.125832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.126953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.126984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.127120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.127299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 
00:36:53.659 [2024-12-16 22:42:43.127455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.127606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.127741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.127891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.127923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.128159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.128200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.128379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.128413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.128599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.128632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.128848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.128879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.129091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.129123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.129238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.129272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 
00:36:53.659 [2024-12-16 22:42:43.129445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.659 [2024-12-16 22:42:43.129477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.659 qpair failed and we were unable to recover it. 00:36:53.659 [2024-12-16 22:42:43.129576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.129615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.129843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.129876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.129983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.130135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.130353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.130493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.130694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.130897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.130929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.131135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.131167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 
00:36:53.660 [2024-12-16 22:42:43.131279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.131311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.131476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.131508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.131678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.131709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.131890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.131922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.132934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.132965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 
00:36:53.660 [2024-12-16 22:42:43.133133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.133165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.133297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.133331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.133498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.133530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.133688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.133720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.133840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.133871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.133989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.134187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.134345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.134472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.134627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 
00:36:53.660 [2024-12-16 22:42:43.134771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.134905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.134937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.135896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.135928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.136190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.136236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.136414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.136446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 
00:36:53.660 [2024-12-16 22:42:43.136624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.136656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.136764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.136795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.660 [2024-12-16 22:42:43.136911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.660 [2024-12-16 22:42:43.136943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.660 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.137924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.137955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.138073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 
00:36:53.661 [2024-12-16 22:42:43.138238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.138443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.138594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.138730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.138870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.138902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.139121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.139209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.139410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.139446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.139564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.139596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.139710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.139742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 00:36:53.661 [2024-12-16 22:42:43.139844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.661 [2024-12-16 22:42:43.139875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.661 qpair failed and we were unable to recover it. 
00:36:53.661 [2024-12-16 22:42:43.139983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.661 [2024-12-16 22:42:43.140015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:53.661 qpair failed and we were unable to recover it.
00:36:53.661 [... the three-line failure sequence above (connect() refused with errno = 111/ECONNREFUSED, followed by the unrecoverable qpair error) repeats continuously for tqpair=0x7fe1a0000b90 through 2024-12-16 22:42:43.153661 ...]
00:36:53.663 [2024-12-16 22:42:43.153885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.663 [2024-12-16 22:42:43.153955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:53.663 qpair failed and we were unable to recover it.
00:36:53.666 [... the same failure sequence then repeats continuously for tqpair=0x7fe194000b90 through 2024-12-16 22:42:43.179404, still against addr=10.0.0.2, port=4420 ...]
00:36:53.666 [2024-12-16 22:42:43.179572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.179604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.179813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.179844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.179946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.179978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.180202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.180234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.180339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.180370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.180475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.180505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.180681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.180713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.180830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.180860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.181074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.181105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.181228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.181261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 
00:36:53.666 [2024-12-16 22:42:43.181378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.181410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.181581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.181612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.181783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.181815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.181992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.182023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.182214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.182246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.182372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.182403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.182602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.182633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.182802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.182833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.183008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.183040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 00:36:53.666 [2024-12-16 22:42:43.183139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.183170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.666 qpair failed and we were unable to recover it. 
00:36:53.666 [2024-12-16 22:42:43.183347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.666 [2024-12-16 22:42:43.183379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.183480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.183511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.183687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.183718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.183896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.183926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.184122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.184153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.184330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.184368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.184470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.184501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.184612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.184643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.184810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.184842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.185075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.185106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 
00:36:53.667 [2024-12-16 22:42:43.185364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.185397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.185568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.185599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.185767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.185799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.185988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.186019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.186165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.186209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.186318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.186349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.186517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.186548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.186750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.186781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.187048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.187083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.187260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.187293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 
00:36:53.667 [2024-12-16 22:42:43.187581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.187612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.187866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.187898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.188092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.188123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.188247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.188280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.188383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.188415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.188587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.188618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.188845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.188877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 
00:36:53.667 [2024-12-16 22:42:43.189499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.189964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.189996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.190107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.190139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.190277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.190311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.190423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.190454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.190665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.190697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.190869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.190901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.191017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 
00:36:53.667 [2024-12-16 22:42:43.191153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.191365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.191499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.191632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.191828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.191860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.192025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.667 [2024-12-16 22:42:43.192067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.667 qpair failed and we were unable to recover it. 00:36:53.667 [2024-12-16 22:42:43.192252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.192286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.192390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.192422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.192605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.192637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.192738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.192769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 
00:36:53.668 [2024-12-16 22:42:43.192866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.192897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.193096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.193286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.193320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.193427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.193459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.193711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.193743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.193905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.193936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.194117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.194326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.194455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.194595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 
00:36:53.668 [2024-12-16 22:42:43.194814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.194964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.194996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.195126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.195265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.195487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.195616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.195758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.195990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.196021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.196313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.196347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.196446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.196478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 
00:36:53.668 [2024-12-16 22:42:43.196740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.196771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.196938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.196969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.197150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.197397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.197429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.197595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.197627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.197792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.197824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.197936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.197967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.198148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.198179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.198448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.198481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.198650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.198681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 
00:36:53.668 [2024-12-16 22:42:43.198784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.198816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.199064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.199262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.199411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.199615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.199820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.199991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.200231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.200455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.200603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 
00:36:53.668 [2024-12-16 22:42:43.200746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.200880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.668 [2024-12-16 22:42:43.200911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.668 qpair failed and we were unable to recover it. 00:36:53.668 [2024-12-16 22:42:43.201032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.201063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.201236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.201269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.201505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.201536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.201721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.201753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.201922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.201954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.202139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.202170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.202310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.202342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.202608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.202640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 
00:36:53.669 [2024-12-16 22:42:43.202758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.202790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.202908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.202939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.203104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.203136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.203312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.203346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.203463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.203494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.203660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.203692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.203858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.203890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.204060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.204091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.204213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.204246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 00:36:53.669 [2024-12-16 22:42:43.204362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.204394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it. 
00:36:53.669 [2024-12-16 22:42:43.204573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.669 [2024-12-16 22:42:43.204604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.669 qpair failed and we were unable to recover it.
00:36:53.672 [2024-12-16 22:42:43.228600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.672 [2024-12-16 22:42:43.228671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.672 qpair failed and we were unable to recover it.
00:36:53.673 [2024-12-16 22:42:43.236316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.673 [2024-12-16 22:42:43.236388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.673 qpair failed and we were unable to recover it.
00:36:53.674 [2024-12-16 22:42:43.243669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.243739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it.
[identical connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplets, differing only in microsecond timestamps, repeat continuously from 22:42:43.204573 through 22:42:43.245050 for the four tqpair handles shown above, all targeting addr=10.0.0.2, port=4420; the repeats are collapsed here]
00:36:53.674 [2024-12-16 22:42:43.245160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.245204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.245380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.245411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.245524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.245556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.245660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.245690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.245952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.245983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.246155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.246187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.246307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.246339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.246465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.246496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.246677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.246708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.246913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.246944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 
00:36:53.674 [2024-12-16 22:42:43.247065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.247097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.247291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.247325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.247428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.247456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.247697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.247728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.247846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.247877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 
00:36:53.674 [2024-12-16 22:42:43.248832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.248863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.248968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.249919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.249950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.250055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.250085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.250253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.250286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 
00:36:53.674 [2024-12-16 22:42:43.250391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.250421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.250533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.674 [2024-12-16 22:42:43.250564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.674 qpair failed and we were unable to recover it. 00:36:53.674 [2024-12-16 22:42:43.250673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.250704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.250833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.250865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.251848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.251879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 
00:36:53.675 [2024-12-16 22:42:43.252066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.252212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.252353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.252488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.252623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.252890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.252921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.253104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.253142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.253289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.253393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.253424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.253601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.253632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 
00:36:53.675 [2024-12-16 22:42:43.253816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.253848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.254870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.254975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.255111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 
00:36:53.675 [2024-12-16 22:42:43.255462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.255758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.255896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.255927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.256131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.256387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.256530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.256733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.256873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.256990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.257129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 
00:36:53.675 [2024-12-16 22:42:43.257308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.257530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.257797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.257938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.257976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.258088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.258120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.258221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.258256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.258444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.258477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.258588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.258619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.675 qpair failed and we were unable to recover it. 00:36:53.675 [2024-12-16 22:42:43.258735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.675 [2024-12-16 22:42:43.258767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.258937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.258969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 
00:36:53.676 [2024-12-16 22:42:43.259075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.259301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.259453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.259589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.259739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.259893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.259924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.260041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.260184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.260395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.260595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 
00:36:53.676 [2024-12-16 22:42:43.260730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.260873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.260905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.261076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.261106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.261281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.261314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.261486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.261524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.261634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.261665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.261840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.261872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.262054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.262252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.262402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 
00:36:53.676 [2024-12-16 22:42:43.262542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.262777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.262922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.262953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.263906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.263937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.264045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 
00:36:53.676 [2024-12-16 22:42:43.264177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.264328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.264536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.264734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.264942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.264973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.265076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.265108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.265366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.265398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.265505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.265536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.265653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.265686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.265857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.265887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 
00:36:53.676 [2024-12-16 22:42:43.266168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.266209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.266422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.266453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.266621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.266652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.266755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.266786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.676 qpair failed and we were unable to recover it. 00:36:53.676 [2024-12-16 22:42:43.266975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.676 [2024-12-16 22:42:43.267007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 00:36:53.677 [2024-12-16 22:42:43.267175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.677 [2024-12-16 22:42:43.267217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 00:36:53.677 [2024-12-16 22:42:43.267408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.677 [2024-12-16 22:42:43.267439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 00:36:53.677 [2024-12-16 22:42:43.267608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.677 [2024-12-16 22:42:43.267644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 00:36:53.677 [2024-12-16 22:42:43.267765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.677 [2024-12-16 22:42:43.267797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 00:36:53.677 [2024-12-16 22:42:43.267912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.677 [2024-12-16 22:42:43.267942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.677 qpair failed and we were unable to recover it. 
00:36:53.677 [2024-12-16 22:42:43.268129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.677 [2024-12-16 22:42:43.268160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:53.677 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats roughly 200 more times between 22:42:43.268 and 22:42:43.307, always against addr=10.0.0.2, port=4420, with tqpair cycling through 0x24ae6a0, 0x7fe1a0000b90, 0x7fe198000b90, and 0x7fe194000b90 ...]
00:36:53.681 [2024-12-16 22:42:43.307543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.681 [2024-12-16 22:42:43.307574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:53.681 qpair failed and we were unable to recover it.
00:36:53.681 [2024-12-16 22:42:43.307704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.307735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.307938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.307969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.308156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.308186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.308370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.308402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.308588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.308620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.308799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.308830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.309065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.309096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.309296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.309328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.309541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.681 [2024-12-16 22:42:43.309572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.681 qpair failed and we were unable to recover it. 00:36:53.681 [2024-12-16 22:42:43.309831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.309863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 
00:36:53.682 [2024-12-16 22:42:43.309966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.309998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.310170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.310209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.310311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.310342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.310534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.310565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.310733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.310764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.310939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.310970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.311141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.311314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.311446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.311611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 
00:36:53.682 [2024-12-16 22:42:43.311754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.311958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.311990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.312171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.312213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.312319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.312350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.312516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.312547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.312659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.312691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.312795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.312825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.313019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.313050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.313161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.313202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.313325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.313357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 
00:36:53.682 [2024-12-16 22:42:43.313530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.313561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.313729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.313761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.314020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.314051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.314285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.314317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.314490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.314521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.314688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.314720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.314824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.314853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.315022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.315054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.315292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.315326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 00:36:53.682 [2024-12-16 22:42:43.315500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.682 [2024-12-16 22:42:43.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.682 qpair failed and we were unable to recover it. 
00:36:53.682 [2024-12-16 22:42:43.315633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.315665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.315852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.315883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.315985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.316017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.316270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.316304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.316415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.316447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.316575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.316607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.316792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.316823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.316992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.317024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.317136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.317166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.317363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.317395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 
00:36:53.683 [2024-12-16 22:42:43.317495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.317525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.683 [2024-12-16 22:42:43.317690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.683 [2024-12-16 22:42:43.317720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.683 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.317825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.317957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.317989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.318185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.318228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.318340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.318372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.318589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.318620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.318869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.318941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.319214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.319252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.319374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.319407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 
00:36:53.969 [2024-12-16 22:42:43.319575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.319608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.319854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.319887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.320879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.320911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.321144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.321312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 
00:36:53.969 [2024-12-16 22:42:43.321455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.321691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.321829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.321958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.321990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.322102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.322132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.322242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.322274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.322396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.969 [2024-12-16 22:42:43.322426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.969 qpair failed and we were unable to recover it. 00:36:53.969 [2024-12-16 22:42:43.322526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.322557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.322658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.322691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.322871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.322902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 
00:36:53.970 [2024-12-16 22:42:43.323072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.323103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.323270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.323304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.323493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.323525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.323624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.323656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.323775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.323806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.323995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.324202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.324339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.324483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.324630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 
00:36:53.970 [2024-12-16 22:42:43.324781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.324812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.324997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.325818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.325979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.326049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.326175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.326231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.326341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.326374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 
00:36:53.970 [2024-12-16 22:42:43.326564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.326596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.326769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.326800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.326973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.327173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.327333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.327531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.327661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.327796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.327827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.328010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.328245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 
00:36:53.970 [2024-12-16 22:42:43.328464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.328607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.328804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.328939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.328971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.329179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.329222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.970 [2024-12-16 22:42:43.329391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.970 [2024-12-16 22:42:43.329422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.970 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.329524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.329556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.329656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.329687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.329798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.329830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.329997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.330029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 
00:36:53.971 [2024-12-16 22:42:43.330295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.330327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.330442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.330473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.330660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.330692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.330862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.330892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.331069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.331101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.331226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.331259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.331376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.331407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.331573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.331605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.331897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.331928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 00:36:53.971 [2024-12-16 22:42:43.332102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.971 [2024-12-16 22:42:43.332133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.971 qpair failed and we were unable to recover it. 
00:36:53.971 [2024-12-16 22:42:43.332363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.971 [2024-12-16 22:42:43.332396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:53.971 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously, varying only in timestamp, between the entries shown above and below ...]
00:36:53.977 [2024-12-16 22:42:43.372437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.977 [2024-12-16 22:42:43.372467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:53.977 qpair failed and we were unable to recover it.
00:36:53.977 [2024-12-16 22:42:43.372637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.372669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.372771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.372803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.372920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.372950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.373067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.373098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.373292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.373325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.373490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.373520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.373686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.373716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.373845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.373877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.374114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.374271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 
00:36:53.977 [2024-12-16 22:42:43.374474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.374684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.374824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.374956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.374986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.375097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.375128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.375363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.375397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.375564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.375595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.375814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.375845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.376043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.376073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.376264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.376297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 
00:36:53.977 [2024-12-16 22:42:43.376494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.376525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.376710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.376741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.376844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.376874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.376978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.377161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.377295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.377326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.377517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.377548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.977 qpair failed and we were unable to recover it. 00:36:53.977 [2024-12-16 22:42:43.377820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.977 [2024-12-16 22:42:43.377852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.377972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.378201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 
00:36:53.978 [2024-12-16 22:42:43.378427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.378622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.378764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.378904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.378935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.379129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.379161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.379353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.379384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.379496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.379526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.379705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.379742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.379986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.380016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.380223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.380263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 
00:36:53.978 [2024-12-16 22:42:43.380375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.380405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.380588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.380618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.380781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.380813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.380985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.381214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.381438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.381579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.381773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.381931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.381960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.382151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.382181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 
00:36:53.978 [2024-12-16 22:42:43.382355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.382385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.382490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.382522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.382690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.382720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.382955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.382985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.383158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.383188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.383390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.383423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.383529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.383560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.383749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.383779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.383894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.383925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.384037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.384069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 
00:36:53.978 [2024-12-16 22:42:43.384237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.978 [2024-12-16 22:42:43.384268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.978 qpair failed and we were unable to recover it. 00:36:53.978 [2024-12-16 22:42:43.384389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.384419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.384550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.384582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.384773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.384803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.385050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.385083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.385201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.385232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.385415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.385446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.385625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.385657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.385824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.385856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.386026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.386056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 
00:36:53.979 [2024-12-16 22:42:43.386251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.386283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.386386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.386417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.386533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.386564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.386811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.386841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.387078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.387108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.387278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.387310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.387429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.387460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.387702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.387745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.387985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.388186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 
00:36:53.979 [2024-12-16 22:42:43.388347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.388488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.388685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.388820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.388849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.389898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.389928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 
00:36:53.979 [2024-12-16 22:42:43.390101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.390250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.390400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.390599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.390798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.390947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.390977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.391145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.391174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.391445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.979 [2024-12-16 22:42:43.391477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.979 qpair failed and we were unable to recover it. 00:36:53.979 [2024-12-16 22:42:43.391649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.980 [2024-12-16 22:42:43.391680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.980 qpair failed and we were unable to recover it. 00:36:53.980 [2024-12-16 22:42:43.391852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.980 [2024-12-16 22:42:43.391883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.980 qpair failed and we were unable to recover it. 
00:36:53.980 [... 4 further repetitions of the triplet for tqpair=0x7fe1a0000b90, 22:42:43.392069 through 22:42:43.392690, elided ...]
00:36:53.980 [2024-12-16 22:42:43.393005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.980 [2024-12-16 22:42:43.393075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:53.980 qpair failed and we were unable to recover it.
00:36:53.981 [... the same triplet then repeats for the new qpair object tqpair=0x24ae6a0, with only the timestamps advancing, from 22:42:43.393218 through 22:42:43.406314 (~65 repetitions elided) ...]
00:36:53.981 [2024-12-16 22:42:43.406501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.981 [2024-12-16 22:42:43.406531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.981 qpair failed and we were unable to recover it. 00:36:53.981 [2024-12-16 22:42:43.406695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.981 [2024-12-16 22:42:43.406727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.981 qpair failed and we were unable to recover it. 00:36:53.981 [2024-12-16 22:42:43.406908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.981 [2024-12-16 22:42:43.406940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.981 qpair failed and we were unable to recover it. 00:36:53.981 [2024-12-16 22:42:43.407068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.981 [2024-12-16 22:42:43.407098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.981 qpair failed and we were unable to recover it. 00:36:53.981 [2024-12-16 22:42:43.407211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.407244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.407429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.407461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.407568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.407599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.407798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.407828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.407994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.408126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 
00:36:53.982 [2024-12-16 22:42:43.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.408501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.408655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.408858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.408889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.409086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.409266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.409469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.409672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.409866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.409970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 
00:36:53.982 [2024-12-16 22:42:43.410119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.410263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.410494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.410718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.410940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.410971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.411073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.411113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.411241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.411273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.411449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.411480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.411580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.411611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.411878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.411908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 
00:36:53.982 [2024-12-16 22:42:43.412004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.412034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.412146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.412178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.412359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.412390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.412650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.412681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.412876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.412907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.413106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.413136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.413314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.413351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.413563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.413595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.413779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.413810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.413974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.414005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 
00:36:53.982 [2024-12-16 22:42:43.414101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.414132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.414329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.982 [2024-12-16 22:42:43.414362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.982 qpair failed and we were unable to recover it. 00:36:53.982 [2024-12-16 22:42:43.414475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.414506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.414649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.414680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.414848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.414878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.415115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.415146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.415257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.415287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.415458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.415488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.415655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.415687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.415790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.415822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 
00:36:53.983 [2024-12-16 22:42:43.415996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.416135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.416390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.416536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.416826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.416957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.416988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.417105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.417137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.417345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.417536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.417566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.417751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.417782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 
00:36:53.983 [2024-12-16 22:42:43.417960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.417990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.418956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.418986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.419245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.419278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.419377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.419409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.419521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.419553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 
00:36:53.983 [2024-12-16 22:42:43.419660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.419690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.419872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.419903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.420068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.420099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.420296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.420328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.420528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.983 qpair failed and we were unable to recover it. 00:36:53.983 [2024-12-16 22:42:43.420664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.983 [2024-12-16 22:42:43.420694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.420813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.420844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.421015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.421253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.421405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 
00:36:53.984 [2024-12-16 22:42:43.421599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.421739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.421889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.421920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.422019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.422055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.422170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.422208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.422391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.422422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.422605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.422637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.422812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.422843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.423101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.423133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.423326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.423359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 
00:36:53.984 [2024-12-16 22:42:43.423623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.423654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.423847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.423878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.424160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.424211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.424397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.424428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.424628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.424658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.424896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.424927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.425109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.425140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.425352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.425385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.425575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.425605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.425707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.425737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 
00:36:53.984 [2024-12-16 22:42:43.425849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.425880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.425978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.426007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.426173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.426215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.426476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.426507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.426618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.426648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.426837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.426867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.427035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.427066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.427205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.427237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.427420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.427450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.427617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.427647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 
00:36:53.984 [2024-12-16 22:42:43.427840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.427875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.428043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.428075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.984 qpair failed and we were unable to recover it. 00:36:53.984 [2024-12-16 22:42:43.428243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.984 [2024-12-16 22:42:43.428275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.428467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.428498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.428670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.428701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.428885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.428915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.429189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.429235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.429425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.429457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.429620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.429652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.429845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.429877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 
00:36:53.985 [2024-12-16 22:42:43.430072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.430104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.430217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.430250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.430356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.430387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.430552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.430584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.430847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.430878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.431056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.431324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.431358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.431527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.431558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.431741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.431771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 00:36:53.985 [2024-12-16 22:42:43.431871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.985 [2024-12-16 22:42:43.431903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.985 qpair failed and we were unable to recover it. 
00:36:53.985 [2024-12-16 22:42:43.432162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.985 [2024-12-16 22:42:43.432227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:53.985 qpair failed and we were unable to recover it.
00:36:53.991 (last three messages repeated for every subsequent reconnect attempt, from [2024-12-16 22:42:43.432404] through [2024-12-16 22:42:43.473448]; each attempt on tqpair=0x24ae6a0 to 10.0.0.2, port 4420 failed with errno = 111)
00:36:53.991 [2024-12-16 22:42:43.473616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.473650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.473899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.473934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.474131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.474164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.474353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.474387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.474559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.474592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.474793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.474826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.474993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.475226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.475482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.475621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 
00:36:53.991 [2024-12-16 22:42:43.475771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.475933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.475964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.476947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.476980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.477171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.477215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.477342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.477376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 
00:36:53.991 [2024-12-16 22:42:43.477637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.477670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.477785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.477818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.477935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.477968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.478088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.478121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.478227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.478261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.478363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.478396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.478512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.478545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.478675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.478707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.479019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.479090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 00:36:53.991 [2024-12-16 22:42:43.479296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.991 [2024-12-16 22:42:43.479336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.991 qpair failed and we were unable to recover it. 
00:36:53.991 [2024-12-16 22:42:43.479471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.479504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.479625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.479659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.479763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.479795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.479971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.480125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.480351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.480577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.480734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.480942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.480974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.481089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 
00:36:53.992 [2024-12-16 22:42:43.481291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.481426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.481579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.481724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.481923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.481956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.482073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.482103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.482364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.482396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.482516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.482547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.482655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.482686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.482791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.482822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 
00:36:53.992 [2024-12-16 22:42:43.482987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.483018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.483255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.483289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.483409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.483439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.483611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.483642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.483914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.483945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.484079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.484281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.484415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.484623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.484751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 
00:36:53.992 [2024-12-16 22:42:43.484965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.484996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.485093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.485123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.992 [2024-12-16 22:42:43.485291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.992 [2024-12-16 22:42:43.485326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.992 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.485504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.485534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.485703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.485734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.485903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.485932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.486114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.486145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.486355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.486388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.486548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.486619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.486867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.486904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 
00:36:53.993 [2024-12-16 22:42:43.487018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.487051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.487253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.487289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.487481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.487513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.487721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.487751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.487924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.487956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.488067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.488294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.488500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.488647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.488798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 
00:36:53.993 [2024-12-16 22:42:43.488931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.488963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.489139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.489180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.489322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.489354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.489545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.489576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.489754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.489786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.489962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.489993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.490105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.490137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.490314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.490347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.490516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.490547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.490716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.490747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 
00:36:53.993 [2024-12-16 22:42:43.490942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.490973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.491139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.491170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.491303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.491337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.491574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.491605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.491835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.491866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.492068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.492100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.492270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.492304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.492484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.492516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.492802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.492833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.492965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.492997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 
00:36:53.993 [2024-12-16 22:42:43.493105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.993 [2024-12-16 22:42:43.493136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.993 qpair failed and we were unable to recover it. 00:36:53.993 [2024-12-16 22:42:43.493249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.493282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.493400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.493431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.493548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.493579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.493748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.493782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.493896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.493927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.494041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.494073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.494174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.494215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.494373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.494443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.494639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.494676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 
00:36:53.994 [2024-12-16 22:42:43.494796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.494830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.495919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.495952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.496055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.496086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.496205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.496238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.496352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.496383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 
00:36:53.994 [2024-12-16 22:42:43.496558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.496589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.496869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.496910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.497819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.497849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.498017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.498049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.498291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.498325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 
00:36:53.994 [2024-12-16 22:42:43.498434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.498467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.498672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.498703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.498870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.498901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.499084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.499116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.499225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.499259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.499386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.499418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.499610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.499642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.499901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.499933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.994 [2024-12-16 22:42:43.500113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.994 [2024-12-16 22:42:43.500145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.994 qpair failed and we were unable to recover it. 00:36:53.995 [2024-12-16 22:42:43.500411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.995 [2024-12-16 22:42:43.500443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:53.995 qpair failed and we were unable to recover it. 
00:36:53.996 [2024-12-16 22:42:43.509670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.996 [2024-12-16 22:42:43.509742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:53.996 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair failure messages for tqpair=0x7fe194000b90 repeat continuously from 22:42:43.509670 through 22:42:43.537547; duplicates elided ...]
00:36:54.000 [2024-12-16 22:42:43.537657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.537690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.537864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.537895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.537999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.538272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.538474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.538672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.538825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.538963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.538994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.539098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.539135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.539318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.539351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 
00:36:54.000 [2024-12-16 22:42:43.539458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.539490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.539676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.539708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.539875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.539907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.540073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.540106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.540246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.540279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.540515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.540547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.540650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.540681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.540783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.540814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.541093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.541125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.541231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.541264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 
00:36:54.000 [2024-12-16 22:42:43.541453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.000 [2024-12-16 22:42:43.541485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.000 qpair failed and we were unable to recover it. 00:36:54.000 [2024-12-16 22:42:43.541665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.541697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.541874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.541906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.542022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.542052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.542248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.542281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.542401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.542433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.542612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.542643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.542759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.542791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.543074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.543105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.543280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.543312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 
00:36:54.001 [2024-12-16 22:42:43.543415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.543447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.543634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.543666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.543842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.543873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.544128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.544159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.544303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.544337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.544528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.544559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.544659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.544690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.544818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.544850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.545016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.545054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.545220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.545253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 
00:36:54.001 [2024-12-16 22:42:43.545441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.545473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.545689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.545721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.545913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.545944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.546134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.546166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.546285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.546318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.546486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.546517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.546700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.546732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.546860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.546891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.547011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.547047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.547155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.547186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 
00:36:54.001 [2024-12-16 22:42:43.547390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.547422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.001 [2024-12-16 22:42:43.547530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.001 [2024-12-16 22:42:43.547561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.001 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.547672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.547703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.547821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.547854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.548023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.548054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.548227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.548258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.548432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.548463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.548638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.548910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.548942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.549108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.549139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 
00:36:54.002 [2024-12-16 22:42:43.549251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.549282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.549531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.549562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.549672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.549704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.549902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.549933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.550907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.550940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 
00:36:54.002 [2024-12-16 22:42:43.551045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.551076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.551175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.551216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.551390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.551423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.551616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.551648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.551829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.551860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.551980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.552012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.552131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.552163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.552401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.552471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.552669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.552705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.552947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.552978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 
00:36:54.002 [2024-12-16 22:42:43.553103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.553270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.553418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.553557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.553692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.553935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.553969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.554138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.554170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.554374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.554407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.554577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.554618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 00:36:54.002 [2024-12-16 22:42:43.554792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.554824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.002 qpair failed and we were unable to recover it. 
00:36:54.002 [2024-12-16 22:42:43.554994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.002 [2024-12-16 22:42:43.555025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.555132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.555161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.555336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.555372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.555481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.555512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.555680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.555712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.555882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.555913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.556030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.556061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.556235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.556268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.556368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.556400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.556519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.556550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 
00:36:54.003 [2024-12-16 22:42:43.556740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.556772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.557034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.557066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.557244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.557276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.557398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.557430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.557544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.557576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.557778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.557809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.558010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.558143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.558361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.558569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 
00:36:54.003 [2024-12-16 22:42:43.558706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.558918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.558950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.559069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.559100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.559300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.559334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.559440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.559470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.559648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.559681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.559851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.559882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.560000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.560145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.560326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 
00:36:54.003 [2024-12-16 22:42:43.560532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.560732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.560932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.560963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.561894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.003 [2024-12-16 22:42:43.561930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.003 qpair failed and we were unable to recover it. 00:36:54.003 [2024-12-16 22:42:43.562035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.004 [2024-12-16 22:42:43.562065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.004 qpair failed and we were unable to recover it. 
00:36:54.004 [2024-12-16 22:42:43.562167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.004 [2024-12-16 22:42:43.562205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.004 qpair failed and we were unable to recover it.
00:36:54.004 [... the same three-message failure sequence repeats, on the order of two hundred times, for every reconnect attempt between 2024-12-16 22:42:43.562 and 22:42:43.601 (log prefixes 00:36:54.004 through 00:36:54.009): connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error, and each qpair fails and cannot be recovered. The failing tqpair is 0x7fe194000b90 for the first attempts, then 0x7fe1a0000b90, then 0x24ae6a0, and finally 0x7fe1a0000b90 again ...]
00:36:54.009 [2024-12-16 22:42:43.601733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.601763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.601879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.601912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.602898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.602929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.603029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.603232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-16 22:42:43.603461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.603598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.603750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.009 qpair failed and we were unable to recover it. 00:36:54.009 [2024-12-16 22:42:43.603945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.009 [2024-12-16 22:42:43.603977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.604914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.604946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-16 22:42:43.605063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.605211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.605349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.605641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.605791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.605925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.605955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.606124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.606154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.606345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.606378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.606550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.606582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.606774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.606806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-16 22:42:43.606936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.606967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.607942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.607972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.608162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.608202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.608305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.608337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.608462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.608492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-16 22:42:43.608685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.608717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.608841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.608873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.608979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.609209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.609348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.609476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.609747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.010 qpair failed and we were unable to recover it. 00:36:54.010 [2024-12-16 22:42:43.609949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.010 [2024-12-16 22:42:43.609981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.610147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.610177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.610302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.610333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-16 22:42:43.610591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.610623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.610740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.610770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.610879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.610910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.611009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.611039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.611296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.611329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.611607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.611638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.611753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.611784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.611960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.612205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.612407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-16 22:42:43.612626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.612775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.612923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.612952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.613947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.613979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.614264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.614297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-16 22:42:43.614482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.614513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.614687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.614718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.614832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.614862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.614961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.614993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.615159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.615189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.615377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.615407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.615520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.615552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.615662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.615693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.615884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.615914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.616017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-16 22:42:43.616286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.616432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.616629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.616777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.616928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.616958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.617124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.617156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.011 qpair failed and we were unable to recover it. 00:36:54.011 [2024-12-16 22:42:43.617334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.011 [2024-12-16 22:42:43.617366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.617656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.617686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.617799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.617830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.617953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.617983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 
00:36:54.012 [2024-12-16 22:42:43.618086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.618316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.618465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.618606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.618809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.618952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.618982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.619150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.619181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.619302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.619337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.619527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.619556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.619723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.619752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 
00:36:54.012 [2024-12-16 22:42:43.619870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.619900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.619999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.620812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.620982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.621184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.621400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 
00:36:54.012 [2024-12-16 22:42:43.621554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.621694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.621915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.622114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.622146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.622325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.622357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.622560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.622590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.622704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.622735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.622900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.622932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.623033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.623064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.623166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.623209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 
00:36:54.012 [2024-12-16 22:42:43.623449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.623480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.623717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.623748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.623863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.623894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.624013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.624045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.624165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.624206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.624401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.624434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.624687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.012 [2024-12-16 22:42:43.624718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.012 qpair failed and we were unable to recover it. 00:36:54.012 [2024-12-16 22:42:43.624841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.013 [2024-12-16 22:42:43.624872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.013 qpair failed and we were unable to recover it. 00:36:54.013 [2024-12-16 22:42:43.624975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.013 [2024-12-16 22:42:43.625006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.013 qpair failed and we were unable to recover it. 00:36:54.013 [2024-12-16 22:42:43.625107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.013 [2024-12-16 22:42:43.625138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.013 qpair failed and we were unable to recover it. 
00:36:54.013 [2024-12-16 22:42:43.625275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.013 [2024-12-16 22:42:43.625307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.013 qpair failed and we were unable to recover it.
[the connect()-failed / sock-connection-error / qpair-failed triplet above repeats 85 more times for tqpair=0x7fe1a0000b90, timestamps 22:42:43.625422 through 22:42:43.640247]
00:36:54.015 [2024-12-16 22:42:43.640472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.015 [2024-12-16 22:42:43.640542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.015 qpair failed and we were unable to recover it.
[the same triplet repeats 67 more times for tqpair=0x7fe198000b90, timestamps 22:42:43.640754 through 22:42:43.651953]
00:36:54.303 [2024-12-16 22:42:43.652117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.303 [2024-12-16 22:42:43.652186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.303 qpair failed and we were unable to recover it.
00:36:54.303 [2024-12-16 22:42:43.652341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.303 [2024-12-16 22:42:43.652377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.303 qpair failed and we were unable to recover it.
[the same triplet repeats 38 more times for tqpair=0x7fe1a0000b90, timestamps 22:42:43.652506 through 22:42:43.659265]
00:36:54.304 [2024-12-16 22:42:43.659461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.304 [2024-12-16 22:42:43.659501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.304 qpair failed and we were unable to recover it.
[the same triplet repeats 15 more times for tqpair=0x24ae6a0, timestamps 22:42:43.659627 through 22:42:43.661898]
00:36:54.305 [2024-12-16 22:42:43.662007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.662274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.662418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.662549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.662766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.662964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.662996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.663162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.663200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.663306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.663338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.663538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.663570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.663792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 
00:36:54.305 [2024-12-16 22:42:43.663893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.663924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.664866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.664897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 
00:36:54.305 [2024-12-16 22:42:43.665513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.665927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.665965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.666087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.666118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.666292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.666325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.666440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.666471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.666641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.666673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.666863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.666894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 
00:36:54.305 [2024-12-16 22:42:43.667162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.305 [2024-12-16 22:42:43.667916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.305 [2024-12-16 22:42:43.667948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.305 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.668116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.668344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.668508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.668664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 
00:36:54.306 [2024-12-16 22:42:43.668815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.668960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.668991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.669865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.669972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.670112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 
00:36:54.306 [2024-12-16 22:42:43.670262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.670398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.670539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.670680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.670878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.670910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 
00:36:54.306 [2024-12-16 22:42:43.671789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.671928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.671960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.672063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.672095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.672266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.672300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.672504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.672536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.672711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.672748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.672862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.673014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.673045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.673226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.673258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 00:36:54.306 [2024-12-16 22:42:43.673363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.306 [2024-12-16 22:42:43.673394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.306 qpair failed and we were unable to recover it. 
00:36:54.307 [2024-12-16 22:42:43.673507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.673538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.673663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.673695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.673816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.673848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.674012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.674044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.674215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.674248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.674414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.674446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.674685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.674716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.674899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.674931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.675041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.675181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 
00:36:54.307 [2024-12-16 22:42:43.675426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.675565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.675711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.675927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.675958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.676126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.676157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.676297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.676331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.676500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.676532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.676640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.676671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.676911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.676943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 
00:36:54.307 [2024-12-16 22:42:43.677271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.677879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.677986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.678132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.678278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.678426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.678559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 
00:36:54.307 [2024-12-16 22:42:43.678700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.678908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.678939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.679077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.679241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.679377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.679523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.679750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.679984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.307 [2024-12-16 22:42:43.680020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.307 qpair failed and we were unable to recover it. 00:36:54.307 [2024-12-16 22:42:43.680210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.680245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.680349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.680380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 
00:36:54.308 [2024-12-16 22:42:43.680485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.680516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.680637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.680669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.680836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.680869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.680995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.681141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.681359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.681508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.681658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.681878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.681909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 
00:36:54.308 [2024-12-16 22:42:43.682168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.682890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.682997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.683028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.683216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.683248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.683418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.683449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.683654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.683686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 00:36:54.308 [2024-12-16 22:42:43.683797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.308 [2024-12-16 22:42:43.683829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.308 qpair failed and we were unable to recover it. 
00:36:54.308 [2024-12-16 22:42:43.683930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.308 [2024-12-16 22:42:43.683961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.308 qpair failed and we were unable to recover it.
... (the same three-line failure, connect() errno = 111 followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fe198000b90 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 22:42:43.683930 through 22:42:43.720058; identical entries elided) ...
00:36:54.314 [2024-12-16 22:42:43.720031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.314 [2024-12-16 22:42:43.720058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.314 qpair failed and we were unable to recover it.
00:36:54.314 [2024-12-16 22:42:43.720177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.720389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.720525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.720650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.720792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.720923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.720950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 
00:36:54.314 [2024-12-16 22:42:43.721683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.721902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.721996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.722896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.722925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.723035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 
00:36:54.314 [2024-12-16 22:42:43.723173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.723318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.723460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.723595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.723730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.723757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 
00:36:54.314 [2024-12-16 22:42:43.724805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.724932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.724959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.725144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.725173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.725288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.725316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.725477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.725506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.725610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.314 [2024-12-16 22:42:43.725638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.314 qpair failed and we were unable to recover it. 00:36:54.314 [2024-12-16 22:42:43.725736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.725763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.725893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.725923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.726089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.726223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 
00:36:54.315 [2024-12-16 22:42:43.726416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.726613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.726747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.726957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.726987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.727968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.727999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 
00:36:54.315 [2024-12-16 22:42:43.728107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.728877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.728976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.729201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.729339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.729465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 
00:36:54.315 [2024-12-16 22:42:43.729670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.729799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.729927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.729956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.730926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.730954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 
00:36:54.315 [2024-12-16 22:42:43.731049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.731077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.315 qpair failed and we were unable to recover it. 00:36:54.315 [2024-12-16 22:42:43.731173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.315 [2024-12-16 22:42:43.731211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.731383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.731412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.731523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.731552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.731653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.731681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.731782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.731811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.731908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.731935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 
00:36:54.316 [2024-12-16 22:42:43.732489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.732893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.732921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.733926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.733953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 
00:36:54.316 [2024-12-16 22:42:43.734062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.734292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.734462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.734594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.734715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.734923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.734950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 
00:36:54.316 [2024-12-16 22:42:43.735604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.735959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.735984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 00:36:54.316 [2024-12-16 22:42:43.736851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.316 qpair failed and we were unable to recover it. 
00:36:54.316 [2024-12-16 22:42:43.736970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.316 [2024-12-16 22:42:43.736995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.737157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.737185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.737291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.737316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.737473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.737543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.737708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.737778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 
00:36:54.317 [2024-12-16 22:42:43.738800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.738915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.738940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.739884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.739908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 00:36:54.317 [2024-12-16 22:42:43.740080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.317 [2024-12-16 22:42:43.740107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.317 qpair failed and we were unable to recover it. 
00:36:54.317 [2024-12-16 22:42:43.740204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.317 [2024-12-16 22:42:43.740229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.317 qpair failed and we were unable to recover it.
(the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7fe198000b90 recurs in 21 further attempts, timestamped 22:42:43.740388 through 22:42:43.743570, each ending "qpair failed and we were unable to recover it.")
00:36:54.318 [2024-12-16 22:42:43.743698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.318 [2024-12-16 22:42:43.743722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.318 qpair failed and we were unable to recover it.
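errno = 111 is ECONNREFUSED on Linux: the host at 10.0.0.2 answered the TCP SYN for port 4420 (the NVMe/TCP default) with a reset because nothing was listening there at that moment. A minimal sketch of the same failure with plain POSIX sockets follows; this is ordinary socket code, not SPDK's posix.c, and the address and port merely mirror the log:

/* Reproduce errno 111 (ECONNREFUSED): connect() to a reachable host
 * with no listener on the chosen port. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* A reachable host with no listener answers the SYN with RST,
         * so connect() fails with errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against any reachable host with no listener on the port, this prints connect() failed, errno = 111 (Connection refused), the same errno that posix_sock_create reports above.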
00:36:54.318 [2024-12-16 22:42:43.743938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.318 [2024-12-16 22:42:43.743978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.318 qpair failed and we were unable to recover it.
(the same error pair for tqpair=0x7fe194000b90 recurs in 78 further attempts, timestamped 22:42:43.744095 through 22:42:43.757486, each ending "qpair failed and we were unable to recover it.")
00:36:54.320 [2024-12-16 22:42:43.757711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.320 [2024-12-16 22:42:43.757743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.320 qpair failed and we were unable to recover it.
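The tqpair pointer changes between bursts (0x7fe198000b90, then 0x7fe194000b90, then 0x7fe1a0000b90 below), which is consistent with the initiator freeing a qpair it could not recover and allocating a fresh one for the next connection attempt. A hedged sketch of that alloc-try-free retry shape; qpair_alloc, qpair_connect, and qpair_free are hypothetical stand-ins, not SPDK's nvme_tcp API:

/* Illustrative retry loop: each attempt allocates a fresh connection
 * object (hence a new pointer in the log), tries to connect, and frees
 * the object on failure. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct qpair { int fd; };

static struct qpair *qpair_alloc(void) { return calloc(1, sizeof(struct qpair)); }
static void qpair_free(struct qpair *q) { free(q); }
static bool qpair_connect(struct qpair *q, const char *addr, int port)
{
    (void)q; (void)addr; (void)port;
    return false;                 /* stands in for connect() -> ECONNREFUSED */
}

int main(void)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        struct qpair *q = qpair_alloc();   /* new object => new pointer logged */
        if (q && qpair_connect(q, "10.0.0.2", 4420)) {
            printf("connected on attempt %d\n", attempt + 1);
            qpair_free(q);
            return 0;
        }
        printf("sock connection error of tqpair=%p\n", (void *)q);
        printf("qpair failed and we were unable to recover it.\n");
        qpair_free(q);
    }
    return 1;
}

Because each qpair is heap-allocated anew, successive recovery rounds can log different pointers even though the target address and port never change.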
00:36:54.320 [2024-12-16 22:42:43.757960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.320 [2024-12-16 22:42:43.758032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.320 qpair failed and we were unable to recover it.
(the same error pair for tqpair=0x7fe1a0000b90 recurs in 105 further attempts, timestamped 22:42:43.758291 through 22:42:43.778044, each ending "qpair failed and we were unable to recover it.")
00:36:54.322 [2024-12-16 22:42:43.778329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.322 [2024-12-16 22:42:43.778362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.322 qpair failed and we were unable to recover it.
00:36:54.322 [2024-12-16 22:42:43.778479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.778512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.778688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.778720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.778898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.778930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.779061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.779092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.779283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.779316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.779431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.779464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.779669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.779702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.779870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.779901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.780039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.780214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.780248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 
00:36:54.323 [2024-12-16 22:42:43.780364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.780395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.780571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.780603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.780770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.780803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.780995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.781886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.781997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 
00:36:54.323 [2024-12-16 22:42:43.782135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.782324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.782549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.782683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.782816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.782848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.783016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.783157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.783308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.783545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.783682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 
00:36:54.323 [2024-12-16 22:42:43.783884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.783915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.784954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.784986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.785153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.785185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.785442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.323 [2024-12-16 22:42:43.785475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.323 qpair failed and we were unable to recover it. 00:36:54.323 [2024-12-16 22:42:43.785594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.785626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 
00:36:54.324 [2024-12-16 22:42:43.785748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.785779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.786045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.786266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.786465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.786664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.786880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.786989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.787189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.787334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.787608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 
00:36:54.324 [2024-12-16 22:42:43.787745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.787967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.787999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.788956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.788987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.789105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.789137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.789265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.789298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 
00:36:54.324 [2024-12-16 22:42:43.789485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.789516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.789616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.789649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.789768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.789799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.789985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.790203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.790407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.790557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.790724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.790922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.790954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.791063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.791094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 
00:36:54.324 [2024-12-16 22:42:43.791271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.791304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.791481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.791515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.791634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.791665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.791769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.324 [2024-12-16 22:42:43.791800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.324 qpair failed and we were unable to recover it. 00:36:54.324 [2024-12-16 22:42:43.791903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.791935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.792056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.792087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.792255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.792288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.792461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.792493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.792657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.792688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.792861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.792892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 
00:36:54.325 [2024-12-16 22:42:43.793005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.793141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.793382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.793536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.793802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.793935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.793966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.794132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.794164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.794345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.794377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.794550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.794581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.794746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.794777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 
00:36:54.325 [2024-12-16 22:42:43.794950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.794981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.795177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.795218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.795323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.795355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.795525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.795563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.795683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.795715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.795975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.796117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.796328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.796474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.796622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 
00:36:54.325 [2024-12-16 22:42:43.796819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.796966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.796997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.797118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.797150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.797349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.797381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.797566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.797599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.797792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.797824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.797954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.797985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.798166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.798323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.798459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 
00:36:54.325 [2024-12-16 22:42:43.798596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.798749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.798953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.325 [2024-12-16 22:42:43.798986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.325 qpair failed and we were unable to recover it. 00:36:54.325 [2024-12-16 22:42:43.799094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.799246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.799392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.799528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.799680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.799825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.799857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.800131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.800162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 
00:36:54.326 [2024-12-16 22:42:43.800316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.800349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.800485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.800517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.800712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.800853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.800884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.800996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.801028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.801226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.801260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.801379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.801411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.801592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.801624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.801819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.801850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 00:36:54.326 [2024-12-16 22:42:43.802020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.326 [2024-12-16 22:42:43.802051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.326 qpair failed and we were unable to recover it. 
00:36:54.326 [2024-12-16 22:42:43.802159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.326 [2024-12-16 22:42:43.802219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.326 qpair failed and we were unable to recover it.
00:36:54.331 [... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back for this tqpair through 2024-12-16 22:42:43.841985 ...]
00:36:54.331 [2024-12-16 22:42:43.842084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.331 [2024-12-16 22:42:43.842116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.331 qpair failed and we were unable to recover it. 00:36:54.331 [2024-12-16 22:42:43.842215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.331 [2024-12-16 22:42:43.842247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.331 qpair failed and we were unable to recover it. 00:36:54.331 [2024-12-16 22:42:43.842425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.842457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.842577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.842608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.842708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.842740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.842919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.842950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 
00:36:54.332 [2024-12-16 22:42:43.843623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.843805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.843978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.844890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.844991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.845127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 
00:36:54.332 [2024-12-16 22:42:43.845273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.845496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.845695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.845835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.845866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.845984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.846120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.846345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.846551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.846749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.846879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.846911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 
00:36:54.332 [2024-12-16 22:42:43.847079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.847111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.847279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.847312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.847484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.847517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.847626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.847664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.847831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.847862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.847976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.848189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.848337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.848535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.848676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 
00:36:54.332 [2024-12-16 22:42:43.848811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.332 [2024-12-16 22:42:43.848842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.332 qpair failed and we were unable to recover it. 00:36:54.332 [2024-12-16 22:42:43.849010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.849964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.849995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.850168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.850207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.850310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.850341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 
00:36:54.333 [2024-12-16 22:42:43.850453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.850484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.850653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.850684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.850851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.850882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.850986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.851122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.851270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.851497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.851641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.851839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.851871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 00:36:54.333 [2024-12-16 22:42:43.852038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.333 [2024-12-16 22:42:43.852070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.333 qpair failed and we were unable to recover it. 
00:36:54.333 [2024-12-16 22:42:43.852290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.333 [2024-12-16 22:42:43.852362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.333 qpair failed and we were unable to recover it.
00:36:54.333 [... the same triple repeats for tqpair=0x7fe194000b90 nine more times, timestamps 22:42:43.852489 through 22:42:43.854160 ...]
00:36:54.333 [2024-12-16 22:42:43.854353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.333 [2024-12-16 22:42:43.854424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.333 qpair failed and we were unable to recover it.
00:36:54.334 [... the same triple repeats for tqpair=0x7fe198000b90 twenty-nine more times, timestamps 22:42:43.854647 through 22:42:43.860291 ...]
00:36:54.334 [2024-12-16 22:42:43.860441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.334 [2024-12-16 22:42:43.860510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.334 qpair failed and we were unable to recover it.
00:36:54.335 [... the same triple repeats for tqpair=0x7fe194000b90 thirty-nine more times, timestamps 22:42:43.860702 through 22:42:43.867793 ...]
00:36:54.335 [2024-12-16 22:42:43.868104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.335 [2024-12-16 22:42:43.868176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.335 qpair failed and we were unable to recover it.
00:36:54.336 [... the same triple repeats for tqpair=0x24ae6a0 twenty-nine more times, timestamps 22:42:43.868501 through 22:42:43.873840 ...]
00:36:54.336 [2024-12-16 22:42:43.874010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.874042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.874148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.874179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.874429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.874461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.874711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.874743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.874863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.874894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.875069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.875101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.875271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.875305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.875600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.875767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.875799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.875920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.875951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 
00:36:54.336 [2024-12-16 22:42:43.876120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.876152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.876348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.876382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.876492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.876524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.876762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.876793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.876914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.876945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.877054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.877199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.877407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.877551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.877754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 
00:36:54.336 [2024-12-16 22:42:43.877895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.878124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.878156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.878339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.878372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.878556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.336 [2024-12-16 22:42:43.878588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.336 qpair failed and we were unable to recover it. 00:36:54.336 [2024-12-16 22:42:43.878703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.878739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.878839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.878872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.878995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.879027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.879214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.879248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.879510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.879542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.879661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.879692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 
00:36:54.337 [2024-12-16 22:42:43.879804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.879835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.880092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.880123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.880238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.880273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.880508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.880539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.880736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.880768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.880964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.880996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.881102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.881134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.881338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.881371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.881481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.881512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.881686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.881718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 
00:36:54.337 [2024-12-16 22:42:43.881824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.881855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.882955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.882987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.883157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.883372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.883525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 
00:36:54.337 [2024-12-16 22:42:43.883656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.883807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.883954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.883987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.884949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.884981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.885160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.885201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 
00:36:54.337 [2024-12-16 22:42:43.885322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.885354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.885531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.885562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.337 qpair failed and we were unable to recover it. 00:36:54.337 [2024-12-16 22:42:43.885746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.337 [2024-12-16 22:42:43.885778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.885943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.885975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.886145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.886177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.886295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.886329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.886450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.886482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.886653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.886685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.886884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.886915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.887042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.887073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 
00:36:54.338 [2024-12-16 22:42:43.887243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.887278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.887472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.887504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.887684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.887715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.887815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.887846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.888831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.888863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 
00:36:54.338 [2024-12-16 22:42:43.889040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.889071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.889259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.889292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.889464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.889497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.889681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.889713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.889914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.889945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.890139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.890171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.890381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.890414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.890530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.890562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.890744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.891033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.891065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 
00:36:54.338 [2024-12-16 22:42:43.891235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.891269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.891544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.891576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.891692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.891729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.891848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.891879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.892052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.892084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.892212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.892244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.892348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.892379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.892494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.892526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.338 [2024-12-16 22:42:43.892700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.338 [2024-12-16 22:42:43.892732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.338 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.892902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.892932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 
00:36:54.339 [2024-12-16 22:42:43.893032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.893165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.893319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.893540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.893737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.893960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.893991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.894114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.894317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.894467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.894606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 
00:36:54.339 [2024-12-16 22:42:43.894808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.894948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.894979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.895085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.895116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.895308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.895341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.895596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.895627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.895798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.895830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.896000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.896206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.896413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.896549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 
00:36:54.339 [2024-12-16 22:42:43.896689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.896827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.896858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.897022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.897052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.897236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.897269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.897518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.897549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.897671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.897703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.897946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.897978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.898172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.898211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.898330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.898361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 00:36:54.339 [2024-12-16 22:42:43.898480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.339 [2024-12-16 22:42:43.898512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.339 qpair failed and we were unable to recover it. 
00:36:54.339 [2024-12-16 22:42:43.898677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.339 [2024-12-16 22:42:43.898708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.339 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats back-to-back for tqpair=0x24ae6a0 through 2024-12-16 22:42:43.925756 ...]
00:36:54.343 [2024-12-16 22:42:43.925924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.343 [2024-12-16 22:42:43.925994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.343 qpair failed and we were unable to recover it.
[... the identical triplet repeats for tqpair=0x7fe194000b90 through 2024-12-16 22:42:43.933272 ...]
00:36:54.344 [2024-12-16 22:42:43.933506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.344 [2024-12-16 22:42:43.933577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.344 qpair failed and we were unable to recover it.
[... the identical triplet repeats for tqpair=0x7fe1a0000b90 through 2024-12-16 22:42:43.936831 ...]
00:36:54.345 [2024-12-16 22:42:43.936945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.936977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.937090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.937121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.937232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.937265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.937380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.937411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.937611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.937641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.937806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.937837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.938044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.938344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.938483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.938643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 
00:36:54.345 [2024-12-16 22:42:43.938791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.938928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.938960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.939919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.939952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.940124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.940155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.940344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.940377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 
00:36:54.345 [2024-12-16 22:42:43.940488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.940518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.940690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.940722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.940873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.940945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.941084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.941121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.941293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.941327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.941449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.941479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.345 qpair failed and we were unable to recover it. 00:36:54.345 [2024-12-16 22:42:43.941735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.345 [2024-12-16 22:42:43.941768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.941935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.941966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.942134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.942280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 
00:36:54.346 [2024-12-16 22:42:43.942496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.942640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.942778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.942930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.942961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.943094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.943306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.943517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.943650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.943797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.943969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 
00:36:54.346 [2024-12-16 22:42:43.944108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.944341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.944494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.944626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.944767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.944797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.944996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.945213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.945436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.945582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.945722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 
00:36:54.346 [2024-12-16 22:42:43.945944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.945975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.946172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.946405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.946548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.946683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.946892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.946992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.947123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.947281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.947482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 
00:36:54.346 [2024-12-16 22:42:43.947634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.947779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.947811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.948013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.948050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.948156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.346 [2024-12-16 22:42:43.948187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.346 qpair failed and we were unable to recover it. 00:36:54.346 [2024-12-16 22:42:43.948304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.948440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.948471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.948645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.948678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.948780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.948810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.948919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.948950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.949059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 
00:36:54.347 [2024-12-16 22:42:43.949293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.949440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.949571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.949707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.949862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.949895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.950002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.950164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.950396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.950597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.950730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 
00:36:54.347 [2024-12-16 22:42:43.950938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.950970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.951812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.951843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.952010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.952146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.952299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 
00:36:54.347 [2024-12-16 22:42:43.952527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.952750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.952886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.952916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.953017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.347 [2024-12-16 22:42:43.953047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.347 qpair failed and we were unable to recover it. 00:36:54.347 [2024-12-16 22:42:43.953151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.953366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.953516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.953659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.953799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.953947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.953979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 
00:36:54.348 [2024-12-16 22:42:43.954145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.954321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.954458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.954612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.954759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.954900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.954933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.955037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.955168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.955380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.955530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 
00:36:54.348 [2024-12-16 22:42:43.955659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.955804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.955833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.956807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.956838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.957077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.957109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.957226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 
00:36:54.348 [2024-12-16 22:42:43.957429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.957459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.957625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.957656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.957778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.957821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.957985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.958126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.958278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.958479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.958613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.958802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.958830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 00:36:54.348 [2024-12-16 22:42:43.959034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.348 [2024-12-16 22:42:43.959063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.348 qpair failed and we were unable to recover it. 
00:36:54.348 [2024-12-16 22:42:43.959244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.348 [2024-12-16 22:42:43.959274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.348 qpair failed and we were unable to recover it.
00:36:54.348-00:36:54.639 [condensed: the three records above repeat for roughly 200 consecutive reconnect attempts between 22:42:43.959244 and 22:42:43.994627, every one failing with errno = 111 against addr=10.0.0.2, port=4420 and ending "qpair failed and we were unable to recover it."; most attempts are on tqpair=0x7fe198000b90, with scattered attempts on tqpair=0x7fe1a0000b90, 0x7fe194000b90, and 0x24ae6a0]
00:36:54.639 [2024-12-16 22:42:43.994796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.639 [2024-12-16 22:42:43.994828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.639 qpair failed and we were unable to recover it. 00:36:54.639 [2024-12-16 22:42:43.995004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.639 [2024-12-16 22:42:43.995036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.995212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.995254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.995363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.995395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.995590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.995621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.995833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.995864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.996043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.996074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.996258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.996290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.996479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.996512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.996633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.996666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 
00:36:54.640 [2024-12-16 22:42:43.996853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.996884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.997947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.997979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.998079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.998110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.998350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.998384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.998593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.998624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 
00:36:54.640 [2024-12-16 22:42:43.998819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.998851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.999101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.999133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.999311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.999344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.999573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.999605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.999794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.999826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:43.999932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:43.999963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.000209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.000242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.000446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.000478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.000651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.000681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.000871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.000905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 
00:36:54.640 [2024-12-16 22:42:44.001098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.001240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.001273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.001543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.001576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.001682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.001713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.001831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.001864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.002101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.002133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.002253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.002285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.002408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.002439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.640 qpair failed and we were unable to recover it. 00:36:54.640 [2024-12-16 22:42:44.002558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.640 [2024-12-16 22:42:44.002590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.002788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.002819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 
00:36:54.641 [2024-12-16 22:42:44.002925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.002956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.003129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.003162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.003412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.003494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.003699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.003734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.003906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.004050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.004083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.004256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.004290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.004405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.004436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.004613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.004644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.004754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.004785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 
00:36:54.641 [2024-12-16 22:42:44.005048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.005205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.005343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.005547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.005704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.005925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.005956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.006085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.006117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.006223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.006256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.006436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.006467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.006636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.006668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 
00:36:54.641 [2024-12-16 22:42:44.006866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.006897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.007922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.007954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.641 [2024-12-16 22:42:44.008147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.641 [2024-12-16 22:42:44.008179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.641 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.008386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.008417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.008538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.008575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 
00:36:54.642 [2024-12-16 22:42:44.008744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.008776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.008874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.008906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.009870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.009900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.010069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.010274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 
00:36:54.642 [2024-12-16 22:42:44.010429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.010563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.010695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.010898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.010931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.011050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.011081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.011189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.011230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.011400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.011432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.011695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.011727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.011904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.011936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.012035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 
00:36:54.642 [2024-12-16 22:42:44.012185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.012349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.012552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.012757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.012891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.012923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.013181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.013224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.013342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.013375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.013552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.013584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.013693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.013724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.013912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.013944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 
00:36:54.642 [2024-12-16 22:42:44.014068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.014100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.014403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.014438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.014549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.014581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.014714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.014745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.642 [2024-12-16 22:42:44.014918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.642 [2024-12-16 22:42:44.014950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.642 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.015150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.015181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.015346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.015516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.015548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.015807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.015839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.016030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 
00:36:54.643 [2024-12-16 22:42:44.016185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.016338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.016491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.016767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.016964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.016997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.017122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.017327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.017463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.017600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.017796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 
00:36:54.643 [2024-12-16 22:42:44.017945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.017977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.018253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.018287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.018396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.018433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.018629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.018662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.018815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.018847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.018958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.018990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.019179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.019344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.019376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.019542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.019574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.019756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.019789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 
00:36:54.643 [2024-12-16 22:42:44.019971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.020004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.020181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.020222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.020420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.020451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.020574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.020606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.020717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.020749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.021020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.021235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.021375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.021522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.021720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 
00:36:54.643 [2024-12-16 22:42:44.021870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.021902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.022071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.643 [2024-12-16 22:42:44.022102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.643 qpair failed and we were unable to recover it. 00:36:54.643 [2024-12-16 22:42:44.022269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.022303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.022411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.022443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.022612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.022644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.022827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.022860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.022960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.022992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.023158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.023189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.023374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.023407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.023649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.023719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 
00:36:54.644 [2024-12-16 22:42:44.023913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.023949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.024952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.024983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.025090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.025121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.025234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.025267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.025379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.025410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 
00:36:54.644 [2024-12-16 22:42:44.025604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.025636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.025815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.025846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.026867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.026900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.027080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.027111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.027277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.027311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 
00:36:54.644 [2024-12-16 22:42:44.027414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.027446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.027637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.027669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.027839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.027870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.027987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.028186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.028408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.028556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.028765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.028916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.028947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.029150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.029182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 
00:36:54.644 [2024-12-16 22:42:44.029298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.029330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.644 qpair failed and we were unable to recover it. 00:36:54.644 [2024-12-16 22:42:44.029498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.644 [2024-12-16 22:42:44.029530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.029643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.029678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.029784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.029816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.029935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.029967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.030074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.030106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.030302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.030335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.030461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.030493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.030741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.030772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.030970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 
00:36:54.645 [2024-12-16 22:42:44.031123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.031342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.031475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.031623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.031759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.031789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.032044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.032181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.032329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.032471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.032603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 
00:36:54.645 [2024-12-16 22:42:44.032868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.032899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.033922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.033955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.034055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.034086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.034342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.034375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.034487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.034519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 
00:36:54.645 [2024-12-16 22:42:44.034697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.034729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.034843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.034874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.035916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.036113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.036145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 00:36:54.645 [2024-12-16 22:42:44.036348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.036380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.645 qpair failed and we were unable to recover it. 
00:36:54.645 [2024-12-16 22:42:44.036515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.645 [2024-12-16 22:42:44.036546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.036656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.036687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.036790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.036822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.037015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.037046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.037218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.037251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.037421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.037453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.037695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.037726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.037933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.037964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.038222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.038257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.038455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.038486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 
00:36:54.646 [2024-12-16 22:42:44.038592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.038624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.038798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.038830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.039052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.039256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.039458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.039673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.039822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.039995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.040133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.040286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 
00:36:54.646 [2024-12-16 22:42:44.040557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.040687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.040900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.040932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.041238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.041271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.041402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.041433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.041534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.041567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.041760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.041790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.041905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.041936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.042207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.042240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.042421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.042453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 
00:36:54.646 [2024-12-16 22:42:44.042622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.042653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.042826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.646 [2024-12-16 22:42:44.042858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.646 qpair failed and we were unable to recover it. 00:36:54.646 [2024-12-16 22:42:44.042971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.043111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.043345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.043494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.043698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.043898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.043929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.044104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.044136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.044255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.044288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 
00:36:54.647 [2024-12-16 22:42:44.044467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.044498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.044617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.044649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.044821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.044853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.044971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.045913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 
00:36:54.647 [2024-12-16 22:42:44.046020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.046051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.046241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.046280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.046453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.046484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.046720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.046751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.046881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.046912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.047076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.047108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.047224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.047257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.047422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.047454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.047574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.047605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.047798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.047830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 
00:36:54.647 [2024-12-16 22:42:44.048002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.048135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.048299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.048465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.048665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.048804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.048835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.049000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.049031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.049144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.049175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.049292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.049324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.647 [2024-12-16 22:42:44.049492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.049524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 
00:36:54.647 [2024-12-16 22:42:44.049630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.647 [2024-12-16 22:42:44.049662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.647 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.049898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.049929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.050045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.050077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.050244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.050277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.050470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.050512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.050711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.050745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.050913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.050945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.051234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.051268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.051370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.051408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.051600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 
00:36:54.648 [2024-12-16 22:42:44.051742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.051773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.051972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.052004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.052247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.052279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.052485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.052516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.052632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.052663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.052840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.052872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.052975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.053210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.053368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.053519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 
00:36:54.648 [2024-12-16 22:42:44.053720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.053870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.053901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.054878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.054918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.055101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.055136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 00:36:54.648 [2024-12-16 22:42:44.055270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.648 [2024-12-16 22:42:44.055303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.648 qpair failed and we were unable to recover it. 
00:36:54.648 [2024-12-16 22:42:44.055412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.055443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.055618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.055652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.055756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.055789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.055902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.055944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.056064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.056096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.056212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.056252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.648 [2024-12-16 22:42:44.056536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.648 [2024-12-16 22:42:44.056567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.648 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.056669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.056700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.056871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.056902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.057139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.057174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.057355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.057388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.057557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.057588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.057777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.057809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.057910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.057942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.058110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.058141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.058318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.058351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.058531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.058562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.058671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.058702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.058881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.058913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.059021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.059053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.059314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.059349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.059520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.059551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.059726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.059758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.060020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.060053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.060168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.060209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.060419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.060451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.060620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.060652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.060849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.060880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.061072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.061104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.061212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.061246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.061423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.061455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.061648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.061680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.061793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.061825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.062093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.062125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.062248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.062281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.062401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.062433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.062624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.062656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.062842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.062874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.063075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.063107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.063303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.063336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.063508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.063540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.063709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.063740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.063907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.063939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.064120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.064153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.649 [2024-12-16 22:42:44.064343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.649 [2024-12-16 22:42:44.064376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.649 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.064490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.064522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.064800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.064871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.065017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.065053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.065268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.065303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.065483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.065515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.065703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.065735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.065912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.065943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.066920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.066951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.067867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.067898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.068924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.068955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.069073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.069104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.069381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.069415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.069652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.069683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.069854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.069885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.069985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.070016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.070222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.070255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.070355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.070386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.070619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.070650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.070842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.070873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.070985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.071016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.650 [2024-12-16 22:42:44.071117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.650 [2024-12-16 22:42:44.071147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.650 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.071325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.071358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.071531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.071562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.071664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.071696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.071815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.071846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.072051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.072082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.072212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.072251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.072470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.072501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.072701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.072733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.072901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.072933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.073099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.073130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.073366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.073400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.073569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.073601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.073773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.073804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.073989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.074020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.074275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.074307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.074478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.074510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.074675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.074707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.074875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.074906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.075039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.075070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.075315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.075348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.075527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.075557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.075723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.075755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.075922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.075953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.076054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.076085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.076264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.076297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.076478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.076510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.076696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.076728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.076828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.076859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.077025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.077057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.077227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.077261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.077361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.077392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.077496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.077527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.651 qpair failed and we were unable to recover it.
00:36:54.651 [2024-12-16 22:42:44.077723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.651 [2024-12-16 22:42:44.077755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.077929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.077960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.078222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.078254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.078421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.078453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.078621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.078653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.078757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.078788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.078993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.079199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.079335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.079467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.079697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.079894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.079926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.080959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.080991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.081100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.081131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.081320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.081353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.081467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.081498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.081667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.081698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.081867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.081900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.082934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.082965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.083157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.083189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.083301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.083334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.083567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.083598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.083771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.083802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.083973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.084005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.084137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.084168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.084292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.084323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.084428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.652 [2024-12-16 22:42:44.084459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.652 qpair failed and we were unable to recover it.
00:36:54.652 [2024-12-16 22:42:44.084559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.084591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.084802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.084833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.085090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.085122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.085275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.085308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.085427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.085459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.085648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.085680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.085875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.085907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.086145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.086176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.086314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.086346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.086458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.086490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.086679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.086710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.086885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.086917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.087132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.087163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.087304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.087336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.087454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.087486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.087653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.087684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.087853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.087895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.088008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.088040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.088214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.088247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.088412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.653 [2024-12-16 22:42:44.088443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.653 qpair failed and we were unable to recover it.
00:36:54.653 [2024-12-16 22:42:44.088693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.088725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.088908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.088939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.089109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.089140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.089368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.089401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.089504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.089535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.089637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.089668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.089867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.089899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.090064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.090095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.090266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.090299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.090409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.090441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 
00:36:54.653 [2024-12-16 22:42:44.090550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.090581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.090781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.090812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.090984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.091187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.091345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.091478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.091699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.091847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.091888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.092124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.653 [2024-12-16 22:42:44.092156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.653 qpair failed and we were unable to recover it. 00:36:54.653 [2024-12-16 22:42:44.092265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.092297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 
00:36:54.654 [2024-12-16 22:42:44.092466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.092498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.092611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.092643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.092811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.092843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.093910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.093941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.094111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.094143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 
00:36:54.654 [2024-12-16 22:42:44.094275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.094307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.094492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.094524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.094712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.094743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.094911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.094943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.095957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.095989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 
00:36:54.654 [2024-12-16 22:42:44.096260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.096292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.096490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.096522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.096707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.096739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.096848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.096879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.097046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.097078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.097270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.097304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.097428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.097460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.097644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.097676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.097849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.097880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.098058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.098090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 
00:36:54.654 [2024-12-16 22:42:44.098282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.098316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.098501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.098532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.098704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.098736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.098903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.098935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.099053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.099085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.099253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.654 [2024-12-16 22:42:44.099285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.654 qpair failed and we were unable to recover it. 00:36:54.654 [2024-12-16 22:42:44.099466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.099499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.099618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.099649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.099819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.099851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.100017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 
00:36:54.655 [2024-12-16 22:42:44.100165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.100311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.100576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.100717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.100880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.100911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.101076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.101107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.101272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.101306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.101473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.101504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.101739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.101771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.101874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.101906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 
00:36:54.655 [2024-12-16 22:42:44.102022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.102053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.102235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.102267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.102453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.102485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.102718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.102749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.102928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.102959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.103132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.103299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.103343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.103599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.103631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.103739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.103770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.103956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.103988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 
00:36:54.655 [2024-12-16 22:42:44.104168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.104210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.104313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.104344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.104456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.104488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.104602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.104634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.104867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.104899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.105086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.105118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.105353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.105387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.105555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.105586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.105752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.105784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 00:36:54.655 [2024-12-16 22:42:44.105969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.655 [2024-12-16 22:42:44.106001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.655 qpair failed and we were unable to recover it. 
00:36:54.655 [2024-12-16 22:42:44.106120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.106152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.106280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.106313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.106485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.106516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.106705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.106736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.106926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.106958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.107069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.107100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.107283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.107316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.107553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.107585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.107778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.107809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.107909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.107940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 
00:36:54.656 [2024-12-16 22:42:44.108059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.108091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.108307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.108340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.108514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.108546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.108755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.108787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.108965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.108996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.109252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.109285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.109455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.109487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.109695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.109726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.109897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.109929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.110116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.110147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 
00:36:54.656 [2024-12-16 22:42:44.110331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.110363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.110564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.110596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.110695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.110726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.110845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.110877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.111056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.111088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.111276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.111309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.111484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.111521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.111690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.111721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.111868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.111899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.112005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.112036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 
00:36:54.656 [2024-12-16 22:42:44.112210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.112243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.112436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.112467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.112683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.112715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.112879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.112910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.113024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.113056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.113240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.113273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.656 [2024-12-16 22:42:44.113398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.656 [2024-12-16 22:42:44.113429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.656 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.113673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.113705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.113883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.113914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.114105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.114136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 
00:36:54.657 [2024-12-16 22:42:44.114335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.114368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.114543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.114575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.114689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.114720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.114826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.114858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.115032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.115063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.115234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.115267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.115420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.115451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.115617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.115648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.115753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.115785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.116067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.116099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 
00:36:54.657 [2024-12-16 22:42:44.116278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.116310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.116478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.116510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.116702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.116733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.116976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.117007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.117252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.117285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.117521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.117553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.117662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.117693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.117945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.117976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.118227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.118259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 00:36:54.657 [2024-12-16 22:42:44.118430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.657 [2024-12-16 22:42:44.118461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.657 qpair failed and we were unable to recover it. 
00:36:54.657 [2024-12-16 22:42:44.118640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.657 [2024-12-16 22:42:44.118672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.657 qpair failed and we were unable to recover it.
[... the same three messages repeat verbatim for every subsequent reconnect attempt, with only the timestamps advancing (application time 2024-12-16 22:42:44.118906 through 22:42:44.159846, console time 00:36:54.657 through 00:36:54.663); every attempt to tqpair=0x7fe194000b90 at 10.0.0.2 port 4420 fails with errno = 111 and the qpair is never recovered ...]
00:36:54.663 [2024-12-16 22:42:44.160017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.160171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.160395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.160599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.160745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.160902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.160933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.161083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.161114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.161282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.161315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.161502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.161534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.161660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.161691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 
00:36:54.663 [2024-12-16 22:42:44.161886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.161918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.162087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.162130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.162251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.162285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.162390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.162422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.162587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.162618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.162798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.162829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 
00:36:54.663 [2024-12-16 22:42:44.163712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.663 [2024-12-16 22:42:44.163877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.663 qpair failed and we were unable to recover it. 00:36:54.663 [2024-12-16 22:42:44.163988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.164173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.164404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.164546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.164749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.164947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.164980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.165147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.165179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.165395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.165427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 
00:36:54.664 [2024-12-16 22:42:44.165543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.165574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.165694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.165726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.165851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.165883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.166066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.166097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.166266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.166300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.166410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.166442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.166540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.166572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.166831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.166863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.167041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.167206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 
00:36:54.664 [2024-12-16 22:42:44.167344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.167490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.167779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.167915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.167946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.168946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.168977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 
00:36:54.664 [2024-12-16 22:42:44.169080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.169283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.169417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.169550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.169756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.169957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.169989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.170185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.170237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.170409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.170441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.170609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.664 [2024-12-16 22:42:44.170640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.664 qpair failed and we were unable to recover it. 00:36:54.664 [2024-12-16 22:42:44.170752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.170784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 
00:36:54.665 [2024-12-16 22:42:44.170973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.171004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.171110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.171141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.171393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.171426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.171537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.171574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.171790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.171821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.171997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.172148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.172312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.172525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 
00:36:54.665 [2024-12-16 22:42:44.172857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.172889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.173906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.173937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.174110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.174265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.174398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 
00:36:54.665 [2024-12-16 22:42:44.174619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.174753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.174896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.174928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.175955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.176070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.176100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 
00:36:54.665 [2024-12-16 22:42:44.176222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.176256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.176362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.176392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.176629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.665 [2024-12-16 22:42:44.176661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.665 qpair failed and we were unable to recover it. 00:36:54.665 [2024-12-16 22:42:44.176832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.176863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.177950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.177982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 
00:36:54.666 [2024-12-16 22:42:44.178085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.178218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.178424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.178555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.178832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.178967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.178997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.179202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.179234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.179402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.179434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.179667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.179698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.179808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.179840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 
00:36:54.666 [2024-12-16 22:42:44.180019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.180164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.180371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.180514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.180708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.180841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.180873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.181130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.181162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.181283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.181315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.181481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.181511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.181704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.181737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 
00:36:54.666 [2024-12-16 22:42:44.181848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.181878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.182059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.182090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.182264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.182299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.182413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.182443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.182636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.182667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.182846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.182877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.183045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.183076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.183243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.183275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.183388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.183418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 00:36:54.666 [2024-12-16 22:42:44.183583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.666 [2024-12-16 22:42:44.183614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.666 qpair failed and we were unable to recover it. 
00:36:54.666 [2024-12-16 22:42:44.183857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.667 [2024-12-16 22:42:44.183890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.667 qpair failed and we were unable to recover it.
00:36:54.667 [... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 22:42:44.184 through 22:42:44.222 ...]
00:36:54.672 [2024-12-16 22:42:44.222967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bc5e0 (9): Bad file descriptor
00:36:54.672 [2024-12-16 22:42:44.223244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.672 [2024-12-16 22:42:44.223316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.672 qpair failed and we were unable to recover it.
[the three connect/qpair entries above repeat continuously for tqpair=0x7fe194000b90, from 22:42:44.223447 through 22:42:44.255971; duplicate entries elided]
00:36:54.676 [2024-12-16 22:42:44.256129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.256271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.256428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.256624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.256760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.256957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.256987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.257205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.257236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.257411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.257446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.257652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.257683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 00:36:54.676 [2024-12-16 22:42:44.257881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.676 [2024-12-16 22:42:44.257911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.676 qpair failed and we were unable to recover it. 
00:36:54.676 [2024-12-16 22:42:44.258151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.258183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.258301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.258331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.258437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.258468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.258586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.258615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.258848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.258878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.259083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.259252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.259283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.259542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.259573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.259683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.259713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.259925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.259955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 
00:36:54.677 [2024-12-16 22:42:44.260128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.260157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.260350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.260382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.260655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.260684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.260918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.260947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.261213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.261247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.261421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.261451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.261628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.261660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.261911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.261942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.262143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.262173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.262367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.262398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 
00:36:54.677 [2024-12-16 22:42:44.262587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.262616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.262863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.262893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.263956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.263985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.264164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.264201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.264316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.264346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 
00:36:54.677 [2024-12-16 22:42:44.264474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.264504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.264721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.264754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.264855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.264888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.265956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.265987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.266086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.266115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 
00:36:54.677 [2024-12-16 22:42:44.266317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.266350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.266619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.266652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.266891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.266924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.267096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.677 [2024-12-16 22:42:44.267125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.677 qpair failed and we were unable to recover it. 00:36:54.677 [2024-12-16 22:42:44.267304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.267336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.267623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.267655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.267925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.267962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.268276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.268314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.268530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.268564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.268766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.268806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 
00:36:54.678 [2024-12-16 22:42:44.268980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.269010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.269280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.269313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.269428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.269458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.269593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.269625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.269834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.269867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.270140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.270172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 00:36:54.678 [2024-12-16 22:42:44.270383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.678 [2024-12-16 22:42:44.270413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.678 qpair failed and we were unable to recover it. 
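errno = 111 is ECONNREFUSED on Linux: the nvmf target that the test deliberately tore down is no longer listening, so every TCP connect() to 10.0.0.2:4420 is refused until a new target comes up. A minimal shell sketch of the same probe (illustrative only, not part of target_disconnect.sh), assuming it runs somewhere 10.0.0.2:4420 is reachable but unserved:

# Probe the NVMe/TCP listen address with bash's built-in /dev/tcp redirect.
# While no target listens on 10.0.0.2:4420, the connect() fails exactly as
# in the log above, with errno 111 (ECONNREFUSED).
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo 'connect() to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED)'
fi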
00:36:54.678 [2024-12-16 22:42:44.270615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 542045 Killed "${NVMF_APP[@]}" "$@"
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542746
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542746
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542746 ']'
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:54.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:54.943 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:54.943 [2024-12-16 22:42:44.521685] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:54.943 [2024-12-16 22:42:44.521725] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:54.943 [2024-12-16 22:42:44.565622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.943 qpair failed and we were unable to recover it.
00:36:54.943 [2024-12-16 22:42:44.566025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.943 [2024-12-16 22:42:44.566069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.943 qpair failed and we were unable to recover it.
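At this point target_disconnect.sh's disconnect_init kills the old target (pid 542045) and nvmfappstart launches a fresh nvmf_tgt (nvmfpid=542746) inside the cvl_0_0_ns_spdk namespace; waitforlisten then polls (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the new process answers RPCs. A rough sketch of such a wait loop, assuming SPDK's scripts/rpc.py is on PATH as rpc.py (an illustration, not the actual autotest helper):

# Retry until the new target's UNIX-domain RPC socket exists and answers.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
while (( max_retries-- > 0 )); do
    if [ -S "$rpc_addr" ] && rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid 542746) is up and listening on $rpc_addr"
        break
    fi
    sleep 0.5
done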
00:36:54.943 [2024-12-16 22:42:44.566302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.943 [2024-12-16 22:42:44.566336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.943 qpair failed and we were unable to recover it.
00:36:54.944 [... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeats while the new target initializes: on tqpair=0x7fe194000b90 through 22:42:44.570074, once on tqpair=0x24ae6a0 at 22:42:44.570245 and once on tqpair=0x7fe1a0000b90 at 22:42:44.570485, on tqpair=0x7fe198000b90 from 22:42:44.570801 through 22:42:44.578057, and on tqpair=0x24ae6a0 from 22:42:44.578260 through 22:42:44.583130 (all addr=10.0.0.2, port=4420) ...]
00:36:54.946 [2024-12-16 22:42:44.583411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.583450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.583566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.583597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.583708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.583738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.583911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.583942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.584961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.584992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 
00:36:54.946 [2024-12-16 22:42:44.585159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.585201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.585390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.585421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.585541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.585573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.585675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.585706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.585878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.585909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.586011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.586213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.586416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.586552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 
00:36:54.946 [2024-12-16 22:42:44.586892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.586923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.587891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.587921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.588085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.588122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.588332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.588364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.588552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.588584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 
00:36:54.946 [2024-12-16 22:42:44.588771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.588803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.588974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.946 [2024-12-16 22:42:44.589014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.946 qpair failed and we were unable to recover it. 00:36:54.946 [2024-12-16 22:42:44.589276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.589311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.589487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.589517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.589716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.589747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.589859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.589890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.589993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.590127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.590349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.590501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 
00:36:54.947 [2024-12-16 22:42:44.590697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.590908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.590940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.591109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.591141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.591295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.591329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.591557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.591588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.591707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.591737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.591854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.591886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.592076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.592107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.592274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.592308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.592509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.592541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 
00:36:54.947 [2024-12-16 22:42:44.592640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.592673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.592869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.592903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.593909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.593941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.594111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.594142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.594338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.594371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 
00:36:54.947 [2024-12-16 22:42:44.594483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.594514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.594613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.594645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.594813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.594844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.595890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.595926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.947 qpair failed and we were unable to recover it. 00:36:54.947 [2024-12-16 22:42:44.596034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.947 [2024-12-16 22:42:44.596064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 
00:36:54.948 [2024-12-16 22:42:44.596237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.596270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.596448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.596480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.596646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.596678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.596907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.596939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.597113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.597146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.597326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.597359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.597472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.597502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.597699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.597730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.597925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.597955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.598066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.598098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 
00:36:54.948 [2024-12-16 22:42:44.598272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.598307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.598510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.598541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.598671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.598702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.598896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.598927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.598970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:54.948 [2024-12-16 22:42:44.599101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.599132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.599337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.599369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.599558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.599590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.599774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.599806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.599975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.600006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.600205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.600237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 
00:36:54.948 [2024-12-16 22:42:44.600431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.600463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.600698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.600730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.600842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.600873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.600984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.601962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.601994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 
00:36:54.948 [2024-12-16 22:42:44.602252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.602286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.602529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.602560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.948 qpair failed and we were unable to recover it. 00:36:54.948 [2024-12-16 22:42:44.602792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.948 [2024-12-16 22:42:44.602823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.602923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.602953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.603151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.603312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.603457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.603672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.603871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.603984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.604016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 
00:36:54.949 [2024-12-16 22:42:44.604259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.604293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.604393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.604423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.604662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.604695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.604875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.604907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.605083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.605115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.605358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.605391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.605582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.605614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.605790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.605822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.605943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.605974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.606084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 
00:36:54.949 [2024-12-16 22:42:44.606224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.606377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.606524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.606746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.606888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.606920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.607101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.607132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.607259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.607293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.607461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.607493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.607595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.607627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 00:36:54.949 [2024-12-16 22:42:44.607798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.949 [2024-12-16 22:42:44.607830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:54.949 qpair failed and we were unable to recover it. 
00:36:54.949 [2024-12-16 22:42:44.608030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.949 [2024-12-16 22:42:44.608063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:54.949 qpair failed and we were unable to recover it.
00:36:54.950 [2024-12-16 22:42:44.611376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.950 [2024-12-16 22:42:44.611418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420
00:36:54.950 qpair failed and we were unable to recover it.
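For reference: errno 111 on Linux is ECONNREFUSED, i.e. the TCP connect to 10.0.0.2:4420 (the NVMe/TCP port named in the log) was typically answered with an RST because nothing was listening there yet. A minimal standalone sketch of the same syscall-level failure the posix.c line reports -- illustration only, not SPDK source:

    /* Minimal sketch (not SPDK source): the syscall-level failure behind
     * "connect() failed, errno = 111" -- ECONNREFUSED on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target, the peer refuses the
             * connection and connect() fails with errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }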
00:36:54.951 [2024-12-16 22:42:44.621029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:54.951 [2024-12-16 22:42:44.621062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:54.951 [2024-12-16 22:42:44.621070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:54.951 [2024-12-16 22:42:44.621076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:54.951 [2024-12-16 22:42:44.621082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
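The NOTICE lines above are the app's own debugging instructions: attach with 'spdk_trace -s nvmf -i 0' while the application runs, or preserve the shared-memory trace file for offline analysis. A minimal sketch of the second option; the destination filename is a hypothetical choice for illustration:

    /* Minimal sketch: copy the trace file named in the NOTICE above
     * (/dev/shm/nvmf_trace.0) somewhere durable for offline analysis.
     * "nvmf_trace.0.saved" is a hypothetical destination name. */
    #include <stdio.h>

    int main(void)
    {
        FILE *src = fopen("/dev/shm/nvmf_trace.0", "rb");
        FILE *dst = fopen("nvmf_trace.0.saved", "wb");
        if (!src || !dst) {
            perror("fopen");
            return 1;
        }

        char buf[1 << 16];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), src)) > 0) {
            fwrite(buf, 1, n, dst);
        }

        fclose(src);
        fclose(dst);
        return 0;
    }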
00:36:54.951 [2024-12-16 22:42:44.622590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:54.951 [2024-12-16 22:42:44.622696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:54.951 [2024-12-16 22:42:44.622800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:54.951 [2024-12-16 22:42:44.622801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:54.951 [2024-12-16 22:42:44.623092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.951 [2024-12-16 22:42:44.623139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:54.951 qpair failed and we were unable to recover it.
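The reactor.c NOTICEs record SPDK's event framework bringing up one reactor (a polling event loop pinned to a CPU core) on each core in the app's core mask, here cores 4-7. A minimal sketch of the pinning mechanism only -- thread-to-core affinity, not SPDK's reactor implementation; the core number is taken from the NOTICE above:

    /* Minimal sketch (not SPDK's reactor): pin the calling thread to a
     * single core, the mechanism underlying a one-reactor-per-core model. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(4, &set);        /* core 4, as in the NOTICE above */

        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
            return 1;
        }
        printf("pinned to core 4; a reactor would poll for events here\n");
        return 0;
    }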
00:36:54.952 [2024-12-16 22:42:44.627397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.952 [2024-12-16 22:42:44.627443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:54.952 qpair failed and we were unable to recover it.
00:36:55.227 [2024-12-16 22:42:44.649114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.649146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.649360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.649394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.649561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.649594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.649798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.649831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.650056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.650296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.650438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.650635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.650879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.650986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 
00:36:55.227 [2024-12-16 22:42:44.651213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.651384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.651538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.651754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.651891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.651924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.652115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.652148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.652351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.652386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.652510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.652542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.652660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.652692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.652858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.652890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 
00:36:55.227 [2024-12-16 22:42:44.653073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.653107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.653281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.653318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.653493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.653526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.653631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.653664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.653844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.653877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.654004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.654037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.654153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.654186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.654301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.654334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.654596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.654629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.654821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.654855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 
00:36:55.227 [2024-12-16 22:42:44.655027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.655059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.655247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.655283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.655454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.655487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.655590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.655622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.655750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.655783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.655982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.656037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.656165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.656205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.656326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.227 [2024-12-16 22:42:44.656359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.227 qpair failed and we were unable to recover it. 00:36:55.227 [2024-12-16 22:42:44.656484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.656517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.656686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.656719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 
00:36:55.228 [2024-12-16 22:42:44.656887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.656920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.657183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.657224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.657393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.657427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.657701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.657735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.657926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.657959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.658209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.658242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.658422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.658455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.658633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.658665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.658950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.658981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.659177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.659220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 
00:36:55.228 [2024-12-16 22:42:44.659456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.659723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.659754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.659920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.659952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.660212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.660244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.660412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.660444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.660715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.660746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.660871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.660902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.661086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.661234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.661386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 
00:36:55.228 [2024-12-16 22:42:44.661592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.661797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.661956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.661996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.662178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.662217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.662476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.662508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.662677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.662709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.662968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.663001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.663114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.663145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.663352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.663386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.663582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.663615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 
00:36:55.228 [2024-12-16 22:42:44.663852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.663884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.664121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.664156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.664479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.664518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.664794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.664828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.665096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.665130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.665416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.665450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.665626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.228 [2024-12-16 22:42:44.665659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.228 qpair failed and we were unable to recover it. 00:36:55.228 [2024-12-16 22:42:44.665917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.665949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.666201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.666234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.666404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.666437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 
00:36:55.229 [2024-12-16 22:42:44.666607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.666642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.666816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.666849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.667123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.667156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.667436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.667472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.667746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.667781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.668021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.668054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.668261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.668294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.668572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.668606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.668799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.668832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.669100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.669133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 
00:36:55.229 [2024-12-16 22:42:44.669414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.669449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.669699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.669731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.669971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.670003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.670186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.670226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.670426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.670458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.670723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.670754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.671034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.671066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.671370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.671402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.671655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.671687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.671928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.671959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 
00:36:55.229 [2024-12-16 22:42:44.672230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.672264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.672501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.672534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.672703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.672742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.672929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.672961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.673221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.673254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.673444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.673476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.673735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.673768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.674020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.674052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.674223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.674256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.674492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.674525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 
00:36:55.229 [2024-12-16 22:42:44.674693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.674725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.674967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.675000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.675167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.675208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.675395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.675426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.675670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.675702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.229 [2024-12-16 22:42:44.675888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.229 [2024-12-16 22:42:44.675921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.229 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.676202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.676237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.676439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.676472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.676686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.676719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.676954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.676986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 
00:36:55.230 [2024-12-16 22:42:44.677239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.677275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.677443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.677475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.677707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.677739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.677906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.677939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.678207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.678242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.678426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.678459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.678651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.678685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.678869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.678900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.679186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.679233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 00:36:55.230 [2024-12-16 22:42:44.679412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.230 [2024-12-16 22:42:44.679445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.230 qpair failed and we were unable to recover it. 
00:36:55.230 [2024-12-16 22:42:44.679706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:55.230 [2024-12-16 22:42:44.679739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420
00:36:55.230 qpair failed and we were unable to recover it.
[ ... the same three-line failure record repeats back-to-back from 22:42:44.679706 through 22:42:44.719535, always tqpair=0x7fe194000b90, addr=10.0.0.2, port=4420, errno = 111; duplicate records elided ... ]
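For context: errno = 111 in the records above is ECONNREFUSED on Linux, i.e. the TCP connection attempt reached 10.0.0.2 but nothing was accepting on port 4420 (the NVMe/TCP default) at that moment. A minimal stand-alone C sketch that reproduces the same errno against any port with no listener; the address and port are taken from the log, everything else is illustrative:

/* Sketch only, not part of the log or of SPDK: demonstrates how a plain
 * connect() to a port with no listener fails with errno 111, exactly as
 * posix_sock_create reports above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe-oF target listening, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}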
00:36:55.234 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:55.234 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:55.234 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:55.234 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:55.235 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[ ... the same connect()/qpair failure records continue interleaved with the shell trace above, 22:42:44.719820 through 22:42:44.721726, still tqpair=0x7fe194000b90; duplicate records elided ... ]
[ ... failure records continue uninterrupted from 22:42:44.721983 through 22:42:44.726654, all tqpair=0x7fe194000b90, addr=10.0.0.2, port=4420, errno = 111; duplicate records elided ... ]
00:36:55.235 [2024-12-16 22:42:44.726787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.726819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.726929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.726961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.727129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.727160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe194000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.727425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.727478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.727653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.727684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.727828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.727859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.727969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.728001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.728125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.728156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.728284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.728318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 00:36:55.235 [2024-12-16 22:42:44.728427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.235 [2024-12-16 22:42:44.728460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420 00:36:55.235 qpair failed and we were unable to recover it. 
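For context, errno 111 on Linux is ECONNREFUSED: nothing is accepting connections at 10.0.0.2:4420 while the target side is being torn down, so every attempt the host makes is refused. A minimal standalone C sketch (illustrative only, not SPDK code) reproduces the same errno by connecting to a loopback port with no listener; the address and port from the log appear only in the comments:

    /* Standalone sketch: connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED) on Linux, which is what posix_sock_create
     * reports above for 10.0.0.2:4420. Uses 127.0.0.1 for a reliable repro. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }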
00:36:55.236 [... the same three-line sequence repeats for tqpair=0x7fe1a0000b90 from 22:42:44.727653 through 22:42:44.753457 ...]
00:36:55.238 [2024-12-16 22:42:44.753646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:55.238 [2024-12-16 22:42:44.753678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe1a0000b90 with addr=10.0.0.2, port=4420
00:36:55.238 qpair failed and we were unable to recover it.
00:36:55.239 [... the sequence repeats for tqpair=0x7fe1a0000b90 from 22:42:44.753922 through 22:42:44.754924 ...]
00:36:55.239 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:55.239 [... two more sequences for tqpair=0x7fe1a0000b90 at 22:42:44.755179 and 22:42:44.755328 ...]
00:36:55.239 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:55.239 [... one more sequence at 22:42:44.755629 ...]
00:36:55.239 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.239 [... sequences at 22:42:44.755782 and 22:42:44.755979 ...]
00:36:55.239 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:55.239 [... the sequence repeats for tqpair=0x7fe1a0000b90 from 22:42:44.756256 through 22:42:44.757623 ...]
00:36:55.239 [... the sequence keeps repeating for tqpair=0x7fe1a0000b90 from 22:42:44.757819 through 22:42:44.761701 ...]
00:36:55.240 [2024-12-16 22:42:44.762004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:55.240 [2024-12-16 22:42:44.762060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ae6a0 with addr=10.0.0.2, port=4420
00:36:55.240 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure messages for tqpair=0x24ae6a0 repeat from 22:42:44.762353 through 22:42:44.765153]
00:36:55.240 [2024-12-16 22:42:44.765471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.240 [2024-12-16 22:42:44.765522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:55.240 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure messages for tqpair=0x7fe198000b90 repeat from 22:42:44.765721 through 22:42:44.785777]
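The repeating block above is SPDK's NVMe/TCP host redialing the same target after each failed attempt; only the tqpair pointer distinguishes the qpair objects being retried. For orientation only, a hypothetical out-of-band probe of the same endpoint with nvme-cli (an assumption: the harness itself drives SPDK's own host stack, not nvme-cli; the NQN, address, and port are the ones appearing in this log) would be refused the same way while the target's listener is down:

  # Hypothetical probe with nvme-cli; expected to fail while nothing listens on 4420.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1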
[identical connect()/qpair-failure messages for tqpair=0x7fe198000b90 repeat from 22:42:44.785961 through 22:42:44.787916]
00:36:55.242 Malloc0
[connect()/qpair failures for tqpair=0x7fe198000b90 continue from 22:42:44.788156 through 22:42:44.790356, interleaved with the test script:]
00:36:55.242 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.242 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:55.243 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.243 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
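rpc_cmd is the autotest wrapper that forwards RPCs to the target under test; the nvmf_create_transport call above is what later produces the "TCP Transport Init" notice. A sketch of the equivalent direct invocation (the scripts/rpc.py path and its default RPC socket are assumptions; the command and its flags are copied from the log line):

  # Create the TCP transport on a running SPDK target (default socket /var/tmp/spdk.sock assumed).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o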
[identical connect()/qpair-failure messages for tqpair=0x7fe198000b90 repeat from 22:42:44.790637 through 22:42:44.795421]
00:36:55.243 [2024-12-16 22:42:44.795589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.243 [2024-12-16 22:42:44.795620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe198000b90 with addr=10.0.0.2, port=4420 00:36:55.243 qpair failed and we were unable to recover it.
00:36:55.243 [2024-12-16 22:42:44.795687] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[identical connect()/qpair-failure messages for tqpair=0x7fe198000b90 repeat from 22:42:44.795833 through 22:42:44.799879]
[identical connect()/qpair-failure messages for tqpair=0x7fe198000b90 repeat from 22:42:44.800082 through 22:42:44.801502, interleaved with the test script:]
00:36:55.244 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.244 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:55.244 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.244 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[connect()/qpair failures for tqpair=0x7fe198000b90 continue from 22:42:44.801696 through 22:42:44.803652]
00:36:55.244 [... sequence repeated 2 more times, 22:42:44.803911 through 22:42:44.804164, tqpair=0x7fe198000b90 ...]
00:36:55.244 [... sequence repeated 6 times between 22:42:44.804351 and 22:42:44.805842, now for tqpair=0x7fe1a0000b90 ...]
00:36:55.245 [... sequence repeated 12 more times between 22:42:44.806066 and 22:42:44.808834, back on tqpair=0x7fe198000b90 ...]
00:36:55.245 [... sequence repeated once, 22:42:44.809025 ...]
00:36:55.245 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.245 [... sequence repeated 2 more times, 22:42:44.809329 through 22:42:44.809524 ...]
00:36:55.245 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:55.245 [... sequence repeated once, 22:42:44.809783 ...]
00:36:55.245 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.245 [... sequence repeated once, 22:42:44.809947 ...]
00:36:55.245 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:55.245 [... sequence repeated 13 more times between 22:42:44.810225 and 22:42:44.813216, tqpair=0x7fe198000b90 ...]
00:36:55.245 [... sequence repeated 17 more times between 22:42:44.813355 and 22:42:44.816924, tqpair=0x7fe198000b90 ...]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.246 [... sequence repeated 2 more times, 22:42:44.817190 through 22:42:44.817530 ...]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:55.246 [... sequence repeated 2 more times, 22:42:44.817742 through 22:42:44.817921 ...]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.246 [... sequence repeated once, 22:42:44.818118 ...]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:55.246 [... sequence repeated 11 more times between 22:42:44.818344 and 22:42:44.820552, tqpair=0x7fe198000b90 ...]
00:36:55.246 [2024-12-16 22:42:44.820678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:55.246 [2024-12-16 22:42:44.826426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.246 [2024-12-16 22:42:44.826553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.246 [2024-12-16 22:42:44.826601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.246 [2024-12-16 22:42:44.826624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.246 [2024-12-16 22:42:44.826644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90
00:36:55.246 [2024-12-16 22:42:44.826696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:55.246 qpair failed and we were unable to recover it.
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:55.246 22:42:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 542072
00:36:55.246 [... identical CONNECT failure record repeated once more, 22:42:44.836231 through 22:42:44.836422 ...]
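The xtrace lines above are host/target_disconnect.sh provisioning the target through rpc_cmd, the autotest wrapper around SPDK's scripts/rpc.py. Once the listener is up, connect() stops being refused and the failure moves to the fabric layer: the target rejects the I/O-queue CONNECT because it no longer recognizes controller ID 0x1 (the disconnect under test), which the host observes as "sct 1, sc 130" and a CQ transport error -6 on qpair id 2. A minimal sketch of the same provisioning as stand-alone rpc.py calls; the RPC path, the malloc size/block size, and the transport-creation step are assumptions, while the subsystem, namespace, and listener lines are taken from the trace:

```bash
#!/usr/bin/env bash
# Sketch only: recreates the subsystem/listener state implied by the
# rpc_cmd trace lines in this log, against a running nvmf_tgt.
RPC=./scripts/rpc.py   # assumed SPDK checkout layout

$RPC bdev_malloc_create 64 512 -b Malloc0     # backing namespace (assumed size/blocksize)
$RPC nvmf_create_transport -t tcp             # assumed; listeners need the transport first
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```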
00:36:55.247 [... the same Unknown-controller/CONNECT failure record (ctrlr.c: 764, nvme_fabric.c: 599/610, nvme_tcp.c:2348/2125, nvme_qpair.c: 812; sct 1, sc 130; CQ transport error -6 on qpair id 2; tqpair=0x7fe198000b90) repeats roughly every 10 ms, 30 more times between 22:42:44.846226 and 22:42:45.137140, each ending "qpair failed and we were unable to recover it." ...]
00:36:55.529 [2024-12-16 22:42:45.147020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.529 [2024-12-16 22:42:45.147097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.529 [2024-12-16 22:42:45.147111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.529 [2024-12-16 22:42:45.147117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.529 [2024-12-16 22:42:45.147123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.529 [2024-12-16 22:42:45.147138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.529 qpair failed and we were unable to recover it. 00:36:55.529 [2024-12-16 22:42:45.157047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.529 [2024-12-16 22:42:45.157103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.529 [2024-12-16 22:42:45.157117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.529 [2024-12-16 22:42:45.157126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.529 [2024-12-16 22:42:45.157132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.529 [2024-12-16 22:42:45.157147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.529 qpair failed and we were unable to recover it. 00:36:55.529 [2024-12-16 22:42:45.167074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.167130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.167143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.167149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.167155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.167170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 
00:36:55.530 [2024-12-16 22:42:45.177110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.177169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.177182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.177188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.177199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.177214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 00:36:55.530 [2024-12-16 22:42:45.187131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.187190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.187208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.187215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.187221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.187235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 00:36:55.530 [2024-12-16 22:42:45.197170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.197228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.197241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.197248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.197253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.197272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 
00:36:55.530 [2024-12-16 22:42:45.207187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.207249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.207262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.207269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.207274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.207289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 00:36:55.530 [2024-12-16 22:42:45.217274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.530 [2024-12-16 22:42:45.217331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.530 [2024-12-16 22:42:45.217344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.530 [2024-12-16 22:42:45.217350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.530 [2024-12-16 22:42:45.217356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.530 [2024-12-16 22:42:45.217370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.530 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.227320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.227372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.227384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.227390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.227396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.227410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 
00:36:55.794 [2024-12-16 22:42:45.237294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.237360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.237372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.237378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.237384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.237398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.247291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.247346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.247359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.247366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.247372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.247386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.257353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.257408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.257421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.257426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.257432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.257446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 
00:36:55.794 [2024-12-16 22:42:45.267404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.267459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.267472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.267478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.267483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.267497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.277364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.277445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.277457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.277463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.277469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.277482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.287374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.287427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.287446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.287452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.287458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.287473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 
00:36:55.794 [2024-12-16 22:42:45.297393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.297448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.297460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.297466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.297472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.297487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.307494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.307550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.307562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.307568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.307574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.307588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.317526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.317578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.317591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.317597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.317603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.317617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 
00:36:55.794 [2024-12-16 22:42:45.327507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.327601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.327615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.327621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.327627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.327645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.337547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.337602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.337615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.337621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.337627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.337642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.347582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.347639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.347652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.347657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.347663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.347678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 
00:36:55.794 [2024-12-16 22:42:45.357629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.357694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.357706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.794 [2024-12-16 22:42:45.357712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.794 [2024-12-16 22:42:45.357718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.794 [2024-12-16 22:42:45.357732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.794 qpair failed and we were unable to recover it. 00:36:55.794 [2024-12-16 22:42:45.367580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.794 [2024-12-16 22:42:45.367677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.794 [2024-12-16 22:42:45.367690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.367696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.367701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.367716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.377681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.377737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.377750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.377756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.377762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.377777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 
00:36:55.795 [2024-12-16 22:42:45.387631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.387685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.387701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.387707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.387714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.387729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.397683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.397779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.397792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.397798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.397805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.397820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.407737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.407788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.407801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.407807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.407813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.407829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 
00:36:55.795 [2024-12-16 22:42:45.417775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.417835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.417851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.417858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.417864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.417879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.427816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.427900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.427913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.427919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.427926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.427940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.437858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.437951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.437965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.437972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.437978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.437993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 
00:36:55.795 [2024-12-16 22:42:45.447865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.447921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.447934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.447941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.447946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.447960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.457920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.457985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.457999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.458005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.458014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.458030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.467939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.467995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.468008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.468014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.468019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.468034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 
00:36:55.795 [2024-12-16 22:42:45.477870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.477921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.477934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.477940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.477946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.477961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:55.795 [2024-12-16 22:42:45.487978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.795 [2024-12-16 22:42:45.488027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.795 [2024-12-16 22:42:45.488040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.795 [2024-12-16 22:42:45.488046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.795 [2024-12-16 22:42:45.488051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:55.795 [2024-12-16 22:42:45.488066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:55.795 qpair failed and we were unable to recover it. 00:36:56.055 [2024-12-16 22:42:45.498010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.055 [2024-12-16 22:42:45.498065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.055 [2024-12-16 22:42:45.498078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.055 [2024-12-16 22:42:45.498084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.055 [2024-12-16 22:42:45.498090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.055 [2024-12-16 22:42:45.498104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.055 qpair failed and we were unable to recover it. 
00:36:56.055 [2024-12-16 22:42:45.508057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.055 [2024-12-16 22:42:45.508111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.055 [2024-12-16 22:42:45.508124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.055 [2024-12-16 22:42:45.508130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.055 [2024-12-16 22:42:45.508136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.055 [2024-12-16 22:42:45.508150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.055 qpair failed and we were unable to recover it. 00:36:56.055 [2024-12-16 22:42:45.518083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.055 [2024-12-16 22:42:45.518138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.055 [2024-12-16 22:42:45.518151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.055 [2024-12-16 22:42:45.518157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.055 [2024-12-16 22:42:45.518162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.055 [2024-12-16 22:42:45.518177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.055 qpair failed and we were unable to recover it. 00:36:56.055 [2024-12-16 22:42:45.528090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.055 [2024-12-16 22:42:45.528143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.055 [2024-12-16 22:42:45.528155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.055 [2024-12-16 22:42:45.528162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.055 [2024-12-16 22:42:45.528167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.055 [2024-12-16 22:42:45.528182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 
00:36:56.056 [2024-12-16 22:42:45.538104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.538158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.538170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.538176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.538182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.538201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.548127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.548194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.548210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.548216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.548222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.548237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.558172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.558225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.558238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.558245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.558250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.558265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 
00:36:56.056 [2024-12-16 22:42:45.568200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.568252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.568265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.568272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.568278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.568292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.578246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.578303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.578316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.578322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.578328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.578342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.588266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.588323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.588336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.588346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.588352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.588366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 
00:36:56.056 [2024-12-16 22:42:45.598226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.598283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.598296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.598302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.598308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.598322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.608308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.608368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.608380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.608386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.608392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.608407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 00:36:56.056 [2024-12-16 22:42:45.618365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.056 [2024-12-16 22:42:45.618427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.056 [2024-12-16 22:42:45.618440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.056 [2024-12-16 22:42:45.618446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.056 [2024-12-16 22:42:45.618452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.056 [2024-12-16 22:42:45.618466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.056 qpair failed and we were unable to recover it. 
00:36:56.839 [2024-12-16 22:42:46.290236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.839 [2024-12-16 22:42:46.290292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.839 [2024-12-16 22:42:46.290305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.839 [2024-12-16 22:42:46.290311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.839 [2024-12-16 22:42:46.290317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.839 [2024-12-16 22:42:46.290331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.839 qpair failed and we were unable to recover it. 00:36:56.839 [2024-12-16 22:42:46.300272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.839 [2024-12-16 22:42:46.300328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.839 [2024-12-16 22:42:46.300341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.839 [2024-12-16 22:42:46.300347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.839 [2024-12-16 22:42:46.300353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.839 [2024-12-16 22:42:46.300367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.839 qpair failed and we were unable to recover it. 00:36:56.839 [2024-12-16 22:42:46.310331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.839 [2024-12-16 22:42:46.310385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.839 [2024-12-16 22:42:46.310398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.839 [2024-12-16 22:42:46.310404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.839 [2024-12-16 22:42:46.310410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.839 [2024-12-16 22:42:46.310424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.839 qpair failed and we were unable to recover it. 
00:36:56.840 [2024-12-16 22:42:46.320321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.320375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.320388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.320394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.320400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.320417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.330399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.330460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.330473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.330479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.330485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.330500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.340359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.340421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.340434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.340440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.340446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.340460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 
00:36:56.840 [2024-12-16 22:42:46.350347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.350407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.350420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.350427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.350432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.350446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.360460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.360525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.360538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.360544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.360549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.360564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.370462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.370513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.370526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.370532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.370538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.370552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 
00:36:56.840 [2024-12-16 22:42:46.380499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.380592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.380604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.380610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.380616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.380630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.390552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.390602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.390615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.390621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.390627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.390642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.400541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.400596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.400609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.400615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.400621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.400635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 
00:36:56.840 [2024-12-16 22:42:46.410564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.410612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.410628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.410635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.410640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.410654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.420601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.420657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.420670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.420676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.420681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.420695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 00:36:56.840 [2024-12-16 22:42:46.430642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.840 [2024-12-16 22:42:46.430743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.840 [2024-12-16 22:42:46.430755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.840 [2024-12-16 22:42:46.430763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.840 [2024-12-16 22:42:46.430769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.840 [2024-12-16 22:42:46.430785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.840 qpair failed and we were unable to recover it. 
00:36:56.840 [2024-12-16 22:42:46.440638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.440687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.440700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.440705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.440711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.440726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.450682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.450730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.450743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.450749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.450757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.450772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.460643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.460704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.460717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.460723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.460729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.460744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 
00:36:56.841 [2024-12-16 22:42:46.470682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.470731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.470744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.470753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.470759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.470773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.480706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.480770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.480783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.480789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.480795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.480810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.490820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.490875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.490888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.490894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.490900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.490914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 
00:36:56.841 [2024-12-16 22:42:46.500813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.500902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.500915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.500921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.500926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.500942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.510920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.511011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.511024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.511030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.511036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.511050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:56.841 [2024-12-16 22:42:46.520919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.520989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.521001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.521007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.521013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.521027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 
00:36:56.841 [2024-12-16 22:42:46.530855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.841 [2024-12-16 22:42:46.530908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.841 [2024-12-16 22:42:46.530920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.841 [2024-12-16 22:42:46.530926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.841 [2024-12-16 22:42:46.530932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:56.841 [2024-12-16 22:42:46.530946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:56.841 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.540944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.541003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.541019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.541025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.541031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.541046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.550970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.551026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.551039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.551046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.551052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.551066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 
00:36:57.102 [2024-12-16 22:42:46.560981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.561033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.561046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.561052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.561058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.561072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.570966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.571062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.571075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.571081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.571087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.571101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.581080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.581133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.581147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.581153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.581162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.581176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 
00:36:57.102 [2024-12-16 22:42:46.591080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.591134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.591147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.591153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.591159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.591173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.601055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.601105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.601118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.601125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.601131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.601145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.611127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.611180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.611198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.611205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.611210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.611224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 
00:36:57.102 [2024-12-16 22:42:46.621119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.621174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.621186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.621197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.621203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.102 [2024-12-16 22:42:46.621218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.102 qpair failed and we were unable to recover it. 00:36:57.102 [2024-12-16 22:42:46.631254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.102 [2024-12-16 22:42:46.631323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.102 [2024-12-16 22:42:46.631336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.102 [2024-12-16 22:42:46.631342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.102 [2024-12-16 22:42:46.631348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.631363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.641232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.641280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.641293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.641299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.641305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.641320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 
00:36:57.103 [2024-12-16 22:42:46.651266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.651319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.651331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.651338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.651343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.651358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.661280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.661362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.661374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.661380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.661386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.661400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.671323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.671379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.671395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.671402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.671408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.671422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 
00:36:57.103 [2024-12-16 22:42:46.681330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.681395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.681407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.681414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.681419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.681433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.691380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.691427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.691440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.691446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.691452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.691467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.701371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.701424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.701437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.701443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.701449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.701464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 
00:36:57.103 [2024-12-16 22:42:46.711394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.711446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.711459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.711467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.711473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.711488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.721475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.721533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.721545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.721551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.721557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.721572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.731526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.731579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.731592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.731598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.731603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.731618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 
00:36:57.103 [2024-12-16 22:42:46.741580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.741636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.741648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.741654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.741660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.741674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.751508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.751561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.751574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.751580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.751586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.751600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 00:36:57.103 [2024-12-16 22:42:46.761601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.103 [2024-12-16 22:42:46.761656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.103 [2024-12-16 22:42:46.761669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.103 [2024-12-16 22:42:46.761675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.103 [2024-12-16 22:42:46.761681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.103 [2024-12-16 22:42:46.761695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.103 qpair failed and we were unable to recover it. 
[... 66 further copies of the same seven-line CONNECT-failure block follow at roughly 10 ms intervals, from 2024-12-16 22:42:46.771 through 22:42:47.423 (console time 00:36:57.104 to 00:36:57.888), identical except for timestamps; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:57.888 [2024-12-16 22:42:47.433534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.888 [2024-12-16 22:42:47.433605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.888 [2024-12-16 22:42:47.433618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.888 [2024-12-16 22:42:47.433624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.433630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.433644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.443519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.443570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.443583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.443589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.443594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.443612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.453553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.453606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.453618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.453624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.453630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.453645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 
00:36:57.889 [2024-12-16 22:42:47.463611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.463673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.463685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.463691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.463697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.463711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.473612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.473695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.473708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.473714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.473720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.473735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.483636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.483689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.483702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.483708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.483714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.483728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 
00:36:57.889 [2024-12-16 22:42:47.493746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.493830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.493843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.493850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.493855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.493869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.503716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.503773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.503786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.503792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.503798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.503812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.513732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.513813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.513825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.513831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.513837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.513851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 
00:36:57.889 [2024-12-16 22:42:47.523751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.523807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.523820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.523826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.523832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.523846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.533818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.533871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.533889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.533896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.533902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.533916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.543825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.543882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.543898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.543905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.543911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.543927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 
00:36:57.889 [2024-12-16 22:42:47.553809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.553877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.553891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.553898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.553904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.553919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.889 [2024-12-16 22:42:47.563862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.889 [2024-12-16 22:42:47.563915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.889 [2024-12-16 22:42:47.563928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.889 [2024-12-16 22:42:47.563934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.889 [2024-12-16 22:42:47.563940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.889 [2024-12-16 22:42:47.563954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.889 qpair failed and we were unable to recover it. 00:36:57.890 [2024-12-16 22:42:47.573837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.890 [2024-12-16 22:42:47.573891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.890 [2024-12-16 22:42:47.573904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.890 [2024-12-16 22:42:47.573911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.890 [2024-12-16 22:42:47.573920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.890 [2024-12-16 22:42:47.573935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.890 qpair failed and we were unable to recover it. 
00:36:57.890 [2024-12-16 22:42:47.583966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.890 [2024-12-16 22:42:47.584024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.890 [2024-12-16 22:42:47.584038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.890 [2024-12-16 22:42:47.584044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.890 [2024-12-16 22:42:47.584050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:57.890 [2024-12-16 22:42:47.584065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:57.890 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.594022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.594083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.594098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.594104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.594110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.594125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.604006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.604058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.604071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.604077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.604083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.604098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 
00:36:58.150 [2024-12-16 22:42:47.614019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.614078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.614091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.614098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.614103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.614118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.624067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.624123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.624136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.624142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.624148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.624162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.634079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.634164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.634176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.634182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.634188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.634208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 
00:36:58.150 [2024-12-16 22:42:47.644187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.644255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.644268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.644274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.644280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.644294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.654115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.654202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.654215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.654222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.654228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.654242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 00:36:58.150 [2024-12-16 22:42:47.664130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.150 [2024-12-16 22:42:47.664187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.150 [2024-12-16 22:42:47.664208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.150 [2024-12-16 22:42:47.664214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.150 [2024-12-16 22:42:47.664220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.150 [2024-12-16 22:42:47.664235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.150 qpair failed and we were unable to recover it. 
00:36:58.150 [2024-12-16 22:42:47.674234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.674295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.674308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.674314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.674320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.674335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.684221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.684272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.684285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.684291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.684296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.684311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.694182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.694242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.694256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.694262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.694268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.694283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 
00:36:58.151 [2024-12-16 22:42:47.704295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.704350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.704363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.704369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.704378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.704392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.714321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.714377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.714389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.714396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.714401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.714416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.724364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.724414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.724426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.724432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.724438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.724452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 
00:36:58.151 [2024-12-16 22:42:47.734359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.734411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.734424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.734430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.734436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.734450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.744448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.744506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.744519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.744525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.744531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.744545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.754433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.754533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.754545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.754551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.754557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.754571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 
00:36:58.151 [2024-12-16 22:42:47.764477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.764524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.764536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.764543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.764548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.764563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.774458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.774504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.774516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.774523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.774528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.774543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.784561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.784623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.784636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.784642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.784648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.784663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 
00:36:58.151 [2024-12-16 22:42:47.794553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.794615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.794641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.151 [2024-12-16 22:42:47.794649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.151 [2024-12-16 22:42:47.794655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.151 [2024-12-16 22:42:47.794675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.151 qpair failed and we were unable to recover it. 00:36:58.151 [2024-12-16 22:42:47.804574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.151 [2024-12-16 22:42:47.804622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.151 [2024-12-16 22:42:47.804635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.152 [2024-12-16 22:42:47.804642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.152 [2024-12-16 22:42:47.804647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.152 [2024-12-16 22:42:47.804662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.152 qpair failed and we were unable to recover it. 00:36:58.152 [2024-12-16 22:42:47.814627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.152 [2024-12-16 22:42:47.814689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.152 [2024-12-16 22:42:47.814702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.152 [2024-12-16 22:42:47.814708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.152 [2024-12-16 22:42:47.814714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.152 [2024-12-16 22:42:47.814728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.152 qpair failed and we were unable to recover it. 
00:36:58.152 [2024-12-16 22:42:47.824631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.152 [2024-12-16 22:42:47.824686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.152 [2024-12-16 22:42:47.824699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.152 [2024-12-16 22:42:47.824705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.152 [2024-12-16 22:42:47.824710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.152 [2024-12-16 22:42:47.824725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.152 qpair failed and we were unable to recover it. 00:36:58.152 [2024-12-16 22:42:47.834667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.152 [2024-12-16 22:42:47.834722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.152 [2024-12-16 22:42:47.834735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.152 [2024-12-16 22:42:47.834745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.152 [2024-12-16 22:42:47.834751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.152 [2024-12-16 22:42:47.834765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.152 qpair failed and we were unable to recover it. 00:36:58.152 [2024-12-16 22:42:47.844676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.152 [2024-12-16 22:42:47.844747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.152 [2024-12-16 22:42:47.844759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.152 [2024-12-16 22:42:47.844765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.152 [2024-12-16 22:42:47.844771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.152 [2024-12-16 22:42:47.844785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.152 qpair failed and we were unable to recover it. 
00:36:58.412 [2024-12-16 22:42:47.854697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.854779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.854791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.854797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.854802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.854817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.412 qpair failed and we were unable to recover it. 00:36:58.412 [2024-12-16 22:42:47.864765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.864832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.864844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.864850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.864856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.864871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.412 qpair failed and we were unable to recover it. 00:36:58.412 [2024-12-16 22:42:47.874769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.874824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.874838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.874844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.874849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.874867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.412 qpair failed and we were unable to recover it. 
00:36:58.412 [2024-12-16 22:42:47.884793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.884876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.884889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.884895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.884900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.884915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.412 qpair failed and we were unable to recover it. 00:36:58.412 [2024-12-16 22:42:47.894848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.894900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.894914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.894922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.894929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.894946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.412 qpair failed and we were unable to recover it. 00:36:58.412 [2024-12-16 22:42:47.904853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.412 [2024-12-16 22:42:47.904921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.412 [2024-12-16 22:42:47.904936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.412 [2024-12-16 22:42:47.904944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.412 [2024-12-16 22:42:47.904952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.412 [2024-12-16 22:42:47.904968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 
00:36:58.413 [2024-12-16 22:42:47.914902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.914955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.914968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.914975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.914980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.914995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.924818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.924874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.924887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.924893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.924899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.924913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.934859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.934933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.934946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.934952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.934957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.934971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 
00:36:58.413 [2024-12-16 22:42:47.944968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.945038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.945050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.945056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.945062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.945077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.954955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.955051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.955064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.955071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.955076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.955091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.965013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.965066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.965079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.965088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.965093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.965108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 
00:36:58.413 [2024-12-16 22:42:47.975016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.975068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.975082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.975088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.975093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.975108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.985044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.985105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.985117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.985124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.985130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.985144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:47.995113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:47.995168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:47.995181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:47.995187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:47.995198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:47.995213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 
00:36:58.413 [2024-12-16 22:42:48.005104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:48.005155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:48.005168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:48.005174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:48.005180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:48.005203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:48.015139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:48.015196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:48.015210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:48.015216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:48.015222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:48.015237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:48.025183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:48.025280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:48.025293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:48.025300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:48.025305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:48.025320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 
00:36:58.413 [2024-12-16 22:42:48.035142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:48.035212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.413 [2024-12-16 22:42:48.035225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.413 [2024-12-16 22:42:48.035231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.413 [2024-12-16 22:42:48.035236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.413 [2024-12-16 22:42:48.035251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.413 qpair failed and we were unable to recover it. 00:36:58.413 [2024-12-16 22:42:48.045271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.413 [2024-12-16 22:42:48.045321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.045333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.045339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.045345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.045359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 00:36:58.414 [2024-12-16 22:42:48.055245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.055300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.055313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.055319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.055325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.055339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 
00:36:58.414 [2024-12-16 22:42:48.065300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.065355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.065368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.065375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.065380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.065395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 00:36:58.414 [2024-12-16 22:42:48.075289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.075381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.075394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.075400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.075406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.075420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 00:36:58.414 [2024-12-16 22:42:48.085265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.085336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.085349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.085355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.085360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.085375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 
00:36:58.414 [2024-12-16 22:42:48.095370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.095462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.095477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.095484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.095489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.095503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 00:36:58.414 [2024-12-16 22:42:48.105428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.414 [2024-12-16 22:42:48.105518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.414 [2024-12-16 22:42:48.105530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.414 [2024-12-16 22:42:48.105536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.414 [2024-12-16 22:42:48.105542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.414 [2024-12-16 22:42:48.105556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.414 qpair failed and we were unable to recover it. 00:36:58.674 [2024-12-16 22:42:48.115492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.674 [2024-12-16 22:42:48.115552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.674 [2024-12-16 22:42:48.115565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.674 [2024-12-16 22:42:48.115572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.674 [2024-12-16 22:42:48.115577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.674 [2024-12-16 22:42:48.115591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.674 qpair failed and we were unable to recover it. 
00:36:58.674 [2024-12-16 22:42:48.125468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.674 [2024-12-16 22:42:48.125564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.674 [2024-12-16 22:42:48.125576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.674 [2024-12-16 22:42:48.125582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.674 [2024-12-16 22:42:48.125587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.674 [2024-12-16 22:42:48.125601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.674 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.135480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.135530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.135542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.135548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.135557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.135571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.145529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.145583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.145596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.145602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.145608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.145623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 
00:36:58.675 [2024-12-16 22:42:48.155487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.155540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.155553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.155559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.155565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.155580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.165515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.165610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.165623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.165629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.165636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.165650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.175618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.175685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.175698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.175704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.175710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.175725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 
00:36:58.675 [2024-12-16 22:42:48.185620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.185678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.185690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.185697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.185702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.185717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.195683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.195733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.195746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.195752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.195758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.195773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.205718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.205768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.205780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.205786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.205792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.205806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 
00:36:58.675 [2024-12-16 22:42:48.215646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.215698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.215711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.215717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.215723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.215737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.225780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.225835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.225853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.225860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.225866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.225880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.235739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.235795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.235807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.235814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.235820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.235835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 
00:36:58.675 [2024-12-16 22:42:48.245803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.245877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.245890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.245897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.245903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.245917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.255789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.675 [2024-12-16 22:42:48.255843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.675 [2024-12-16 22:42:48.255856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.675 [2024-12-16 22:42:48.255863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.675 [2024-12-16 22:42:48.255868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.675 [2024-12-16 22:42:48.255883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.675 qpair failed and we were unable to recover it. 00:36:58.675 [2024-12-16 22:42:48.265821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.265875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.265890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.265897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.265906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.265921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 
00:36:58.676 [2024-12-16 22:42:48.275954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.276008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.276021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.276027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.276033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.276047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.285859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.285912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.285925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.285931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.285937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.285950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.295881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.295944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.295956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.295962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.295968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.295982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 
00:36:58.676 [2024-12-16 22:42:48.305999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.306070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.306083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.306089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.306095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.306110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.316039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.316107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.316120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.316126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.316132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.316146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.326038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.326093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.326105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.326112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.326118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.326132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 
00:36:58.676 [2024-12-16 22:42:48.336100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.336154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.336167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.336173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.336179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.336197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.346115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.346195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.346208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.346215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.346221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.346235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.676 [2024-12-16 22:42:48.356146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.356201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.356217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.356223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.356229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.356243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 
00:36:58.676 [2024-12-16 22:42:48.366157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.676 [2024-12-16 22:42:48.366219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.676 [2024-12-16 22:42:48.366232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.676 [2024-12-16 22:42:48.366239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.676 [2024-12-16 22:42:48.366244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.676 [2024-12-16 22:42:48.366259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.676 qpair failed and we were unable to recover it. 00:36:58.936 [2024-12-16 22:42:48.376212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.936 [2024-12-16 22:42:48.376287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.936 [2024-12-16 22:42:48.376300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.376306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.376312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.376326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.386224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.386277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.386290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.386296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.386302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.386316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 
00:36:58.937 [2024-12-16 22:42:48.396252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.396350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.396365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.396375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.396382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.396399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.406272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.406326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.406338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.406344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.406350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.406364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.416295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.416357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.416370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.416376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.416381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.416395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 
00:36:58.937 [2024-12-16 22:42:48.426391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.426446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.426459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.426465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.426471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.426485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.436357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.436411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.436423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.436429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.436436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.436453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.446377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.446433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.446446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.446452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.446458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.446472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 
00:36:58.937 [2024-12-16 22:42:48.456406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.456460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.456472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.456478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.456484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.456498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.466460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.466518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.466531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.466537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.466543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.466557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 00:36:58.937 [2024-12-16 22:42:48.476476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:58.937 [2024-12-16 22:42:48.476528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:58.937 [2024-12-16 22:42:48.476540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:58.937 [2024-12-16 22:42:48.476546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:58.937 [2024-12-16 22:42:48.476552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:58.937 [2024-12-16 22:42:48.476566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:58.937 qpair failed and we were unable to recover it. 
00:36:59.463 [2024-12-16 22:42:49.148392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.463 [2024-12-16 22:42:49.148450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.463 [2024-12-16 22:42:49.148462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.463 [2024-12-16 22:42:49.148469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.463 [2024-12-16 22:42:49.148474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.463 [2024-12-16 22:42:49.148488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.463 qpair failed and we were unable to recover it. 00:36:59.463 [2024-12-16 22:42:49.158501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.463 [2024-12-16 22:42:49.158567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.463 [2024-12-16 22:42:49.158579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.463 [2024-12-16 22:42:49.158586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.463 [2024-12-16 22:42:49.158592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.463 [2024-12-16 22:42:49.158605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.463 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.168518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.168587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.168600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.168606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.168612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.168626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 
00:36:59.724 [2024-12-16 22:42:49.178492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.178542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.178555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.178561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.178567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.178581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.188529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.188608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.188621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.188627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.188633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.188646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.198583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.198651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.198663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.198669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.198675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.198689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 
00:36:59.724 [2024-12-16 22:42:49.208583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.208638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.208651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.208657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.208662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.208676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.218655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.218705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.218722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.218729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.218734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.218750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.228673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.228733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.228745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.228751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.228757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.228771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 
00:36:59.724 [2024-12-16 22:42:49.238672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.238728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.238740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.238746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.238752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.238766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.248704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.248759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.248771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.248777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.248783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.248797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.258781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.258844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.258856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.258863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.258871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.258885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 
00:36:59.724 [2024-12-16 22:42:49.268698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.724 [2024-12-16 22:42:49.268777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.724 [2024-12-16 22:42:49.268790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.724 [2024-12-16 22:42:49.268796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.724 [2024-12-16 22:42:49.268802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.724 [2024-12-16 22:42:49.268816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.724 qpair failed and we were unable to recover it. 00:36:59.724 [2024-12-16 22:42:49.278804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.278870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.278883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.278889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.278895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.278909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.288846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.288910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.288923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.288930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.288935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.288950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 
00:36:59.725 [2024-12-16 22:42:49.298840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.298893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.298906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.298912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.298918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.298932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.308895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.308973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.308986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.308993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.308999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.309013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.318929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.318983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.318995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.319002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.319007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.319022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 
00:36:59.725 [2024-12-16 22:42:49.328934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.328987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.329000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.329006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.329012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.329027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.338961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.339035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.339049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.339056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.339062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.339077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.348974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.349029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.349046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.349052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.349058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.349073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 
00:36:59.725 [2024-12-16 22:42:49.359012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.359070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.359082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.359089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.359094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.359109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.369035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.369088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.369101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.369108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.369113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.369128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.378989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.379058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.379073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.379081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.379088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.379105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 
00:36:59.725 [2024-12-16 22:42:49.389082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.389139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.389152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.389158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.389167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.389182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.399140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.399225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.399238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.399244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.725 [2024-12-16 22:42:49.399250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.725 [2024-12-16 22:42:49.399264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.725 qpair failed and we were unable to recover it. 00:36:59.725 [2024-12-16 22:42:49.409152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.725 [2024-12-16 22:42:49.409266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.725 [2024-12-16 22:42:49.409280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.725 [2024-12-16 22:42:49.409286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.726 [2024-12-16 22:42:49.409291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.726 [2024-12-16 22:42:49.409307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.726 qpair failed and we were unable to recover it. 
00:36:59.726 [2024-12-16 22:42:49.419101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.726 [2024-12-16 22:42:49.419155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.726 [2024-12-16 22:42:49.419168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.726 [2024-12-16 22:42:49.419174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.726 [2024-12-16 22:42:49.419180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.726 [2024-12-16 22:42:49.419198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.726 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.429268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.429326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.429339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.429345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.429351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.429365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.439249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.439305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.439317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.439324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.439329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.439343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 
00:36:59.986 [2024-12-16 22:42:49.449246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.449328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.449340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.449347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.449352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.449367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.459306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.459358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.459370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.459377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.459383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.459397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.469369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.469424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.469436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.469442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.469448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.469463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 
00:36:59.986 [2024-12-16 22:42:49.479330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.479382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.479395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.479402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.479407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.479422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.489309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.489363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.489375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.489382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.489388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.489402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 00:36:59.986 [2024-12-16 22:42:49.499389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.499453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.986 [2024-12-16 22:42:49.499466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.986 [2024-12-16 22:42:49.499472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.986 [2024-12-16 22:42:49.499478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.986 [2024-12-16 22:42:49.499493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.986 qpair failed and we were unable to recover it. 
00:36:59.986 [2024-12-16 22:42:49.509383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.986 [2024-12-16 22:42:49.509436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.509450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.509456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.509462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.509476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.519480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.519537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.519549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.519559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.519564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.519579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.529477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.529532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.529544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.529551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.529556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.529571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 
00:36:59.987 [2024-12-16 22:42:49.539611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.539708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.539720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.539727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.539732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.539746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.549557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.549610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.549622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.549628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.549634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.549649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.559541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.559595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.559607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.559613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.559619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.559637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 
00:36:59.987 [2024-12-16 22:42:49.569564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.569646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.569658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.569664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.569670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.569685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.579575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.579633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.579646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.579653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.579658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.579673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.589628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.589684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.589697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.589703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.589709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.589723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 
00:36:59.987 [2024-12-16 22:42:49.599756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.599809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.599822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.599828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.599833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.599848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.609712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.609793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.609806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.609812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.609818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.609832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 00:36:59.987 [2024-12-16 22:42:49.619725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:59.987 [2024-12-16 22:42:49.619820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:59.987 [2024-12-16 22:42:49.619833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:59.987 [2024-12-16 22:42:49.619839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:59.987 [2024-12-16 22:42:49.619845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:36:59.987 [2024-12-16 22:42:49.619859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.987 qpair failed and we were unable to recover it. 
00:37:00.773 [2024-12-16 22:42:50.291790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.291844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.291857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.291863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.291870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.291884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.301740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.301793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.301805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.301812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.301819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.301833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.311790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.311867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.311880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.311886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.311892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.311906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 
00:37:00.773 [2024-12-16 22:42:50.321856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.321910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.321922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.321929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.321935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.321949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.331774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.331833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.331846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.331852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.331858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.331872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.341897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.341949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.341966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.341973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.341978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.341993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 
00:37:00.773 [2024-12-16 22:42:50.351933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.351990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.352003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.352009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.352015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.352029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.361853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.361905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.361917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.361924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.361930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.361944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.372007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.372076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.372090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.372098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.372103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.372119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 
00:37:00.773 [2024-12-16 22:42:50.382001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.382102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.382115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.382121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.382130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.382144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.392020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.392095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.773 [2024-12-16 22:42:50.392108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.773 [2024-12-16 22:42:50.392114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.773 [2024-12-16 22:42:50.392120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.773 [2024-12-16 22:42:50.392134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.773 qpair failed and we were unable to recover it. 00:37:00.773 [2024-12-16 22:42:50.402051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.773 [2024-12-16 22:42:50.402109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.402121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.402128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.402134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.402148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 
00:37:00.774 [2024-12-16 22:42:50.412078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.412130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.412143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.412149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.412155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.412169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 00:37:00.774 [2024-12-16 22:42:50.422114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.422167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.422180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.422186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.422195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.422210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 00:37:00.774 [2024-12-16 22:42:50.432131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.432187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.432204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.432211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.432217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.432232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 
00:37:00.774 [2024-12-16 22:42:50.442174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.442234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.442247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.442254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.442260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.442274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 00:37:00.774 [2024-12-16 22:42:50.452184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.452242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.452255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.452261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.452267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.452282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 00:37:00.774 [2024-12-16 22:42:50.462247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.462311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.462324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.462331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.462337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.462351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 
00:37:00.774 [2024-12-16 22:42:50.472257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:00.774 [2024-12-16 22:42:50.472313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:00.774 [2024-12-16 22:42:50.472328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:00.774 [2024-12-16 22:42:50.472334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:00.774 [2024-12-16 22:42:50.472340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:00.774 [2024-12-16 22:42:50.472355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:00.774 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.482285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.482338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.482351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.482357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.482363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.482377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.492305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.492380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.492392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.492398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.492404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.492417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-12-16 22:42:50.502262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.502314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.502326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.502332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.502338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.502352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.512353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.512428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.512441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.512450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.512455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.512469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.522384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.522437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.522449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.522456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.522461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.522476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-12-16 22:42:50.532356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.532410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.532423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.532429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.532435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.532448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.542437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.542490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.542502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.542508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.542514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.542529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.552480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.552536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.552549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.552555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.552561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.552575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-12-16 22:42:50.562488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.562538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.562551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.562556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.562562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.562577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.572454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.572508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.572521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.572527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.572533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.572547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-12-16 22:42:50.582578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.582631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.582644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.582650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.582655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.035 [2024-12-16 22:42:50.582669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-12-16 22:42:50.592552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.035 [2024-12-16 22:42:50.592616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.035 [2024-12-16 22:42:50.592629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.035 [2024-12-16 22:42:50.592635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.035 [2024-12-16 22:42:50.592640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.592656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.602645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.602710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.602723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.602729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.602735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.602749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.612652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.612729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.612741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.612747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.612753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.612767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-12-16 22:42:50.622611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.622659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.622672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.622678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.622684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.622698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.632707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.632759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.632772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.632778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.632784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.632799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.642672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.642731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.642744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.642753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.642759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.642773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-12-16 22:42:50.652777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.652828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.652841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.652847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.652852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.652866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.662724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.662780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.662792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.662798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.662804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.662818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.672832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.672886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.672898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.672904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.672910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.672924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-12-16 22:42:50.682874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.682929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.682941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.682947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.682953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.682971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.692873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.692960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.692973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.692980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.692986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.693000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.702912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.702980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.702993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.703000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.703006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.703021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-12-16 22:42:50.712930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.712993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.713006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.713012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.713018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.713032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.722889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.722945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.036 [2024-12-16 22:42:50.722957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.036 [2024-12-16 22:42:50.722964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.036 [2024-12-16 22:42:50.722969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.036 [2024-12-16 22:42:50.722983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-12-16 22:42:50.732914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.036 [2024-12-16 22:42:50.732973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.037 [2024-12-16 22:42:50.732986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.037 [2024-12-16 22:42:50.732992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.037 [2024-12-16 22:42:50.732998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.037 [2024-12-16 22:42:50.733012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.037 qpair failed and we were unable to recover it. 
00:37:01.296 [2024-12-16 22:42:50.743030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.743084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.743096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.743103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.743108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.296 [2024-12-16 22:42:50.743122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.296 qpair failed and we were unable to recover it. 00:37:01.296 [2024-12-16 22:42:50.753041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.753098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.753111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.753117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.753123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.296 [2024-12-16 22:42:50.753137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.296 qpair failed and we were unable to recover it. 00:37:01.296 [2024-12-16 22:42:50.763051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.763111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.763124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.763130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.763136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.296 [2024-12-16 22:42:50.763150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.296 qpair failed and we were unable to recover it. 
00:37:01.296 [2024-12-16 22:42:50.773100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.773151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.773168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.773175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.773180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.296 [2024-12-16 22:42:50.773200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.296 qpair failed and we were unable to recover it. 00:37:01.296 [2024-12-16 22:42:50.783062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.783113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.783126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.783133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.783140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.296 [2024-12-16 22:42:50.783155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.296 qpair failed and we were unable to recover it. 00:37:01.296 [2024-12-16 22:42:50.793100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.296 [2024-12-16 22:42:50.793196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.296 [2024-12-16 22:42:50.793209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.296 [2024-12-16 22:42:50.793216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.296 [2024-12-16 22:42:50.793222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe198000b90 00:37:01.297 [2024-12-16 22:42:50.793237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:01.297 qpair failed and we were unable to recover it. 00:37:01.297 [2024-12-16 22:42:50.793349] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:37:01.297 A controller has encountered a failure and is being reset. 00:37:01.297 Controller properly reset. 
00:37:01.297 Initializing NVMe Controllers 00:37:01.297 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:01.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:01.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:01.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:01.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:01.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:01.297 Initialization complete. Launching workers. 00:37:01.297 Starting thread on core 1 00:37:01.297 Starting thread on core 2 00:37:01.297 Starting thread on core 3 00:37:01.297 Starting thread on core 0 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:01.297 00:37:01.297 real 0m10.773s 00:37:01.297 user 0m19.140s 00:37:01.297 sys 0m4.639s 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.297 ************************************ 00:37:01.297 END TEST nvmf_target_disconnect_tc2 00:37:01.297 ************************************ 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.297 rmmod nvme_tcp 00:37:01.297 rmmod nvme_fabrics 00:37:01.297 rmmod nvme_keyring 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 542746 ']' 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 542746 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 542746 ']' 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 542746 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:37:01.297 22:42:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542746 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542746' 00:37:01.556 killing process with pid 542746 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 542746 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 542746 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:01.556 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.557 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.557 22:42:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:04.099 00:37:04.099 real 0m19.535s 00:37:04.099 user 0m46.914s 00:37:04.099 sys 0m9.550s 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:04.099 ************************************ 00:37:04.099 END TEST nvmf_target_disconnect 00:37:04.099 ************************************ 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:04.099 00:37:04.099 real 7m22.746s 00:37:04.099 user 16m48.863s 00:37:04.099 sys 2m8.774s 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.099 22:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.099 ************************************ 00:37:04.099 END TEST nvmf_host 00:37:04.099 ************************************ 00:37:04.099 22:42:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:04.099 22:42:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:04.099 22:42:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:04.099 22:42:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:04.099 22:42:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.099 22:42:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:04.099 ************************************ 00:37:04.099 START TEST nvmf_target_core_interrupt_mode 00:37:04.099 ************************************ 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:04.099 * Looking for test storage... 00:37:04.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:04.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.099 --rc genhtml_branch_coverage=1 00:37:04.099 --rc genhtml_function_coverage=1 00:37:04.099 --rc genhtml_legend=1 00:37:04.099 --rc geninfo_all_blocks=1 00:37:04.099 --rc geninfo_unexecuted_blocks=1 00:37:04.099 00:37:04.099 ' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:04.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.099 --rc genhtml_branch_coverage=1 00:37:04.099 --rc genhtml_function_coverage=1 00:37:04.099 --rc genhtml_legend=1 00:37:04.099 --rc geninfo_all_blocks=1 00:37:04.099 --rc geninfo_unexecuted_blocks=1 00:37:04.099 00:37:04.099 ' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:04.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.099 --rc genhtml_branch_coverage=1 00:37:04.099 --rc genhtml_function_coverage=1 00:37:04.099 --rc genhtml_legend=1 00:37:04.099 --rc geninfo_all_blocks=1 00:37:04.099 --rc geninfo_unexecuted_blocks=1 00:37:04.099 00:37:04.099 ' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:04.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.099 --rc genhtml_branch_coverage=1 00:37:04.099 --rc genhtml_function_coverage=1 00:37:04.099 --rc genhtml_legend=1 00:37:04.099 --rc geninfo_all_blocks=1 00:37:04.099 --rc geninfo_unexecuted_blocks=1 00:37:04.099 00:37:04.099 ' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.099 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:04.100 ************************************ 00:37:04.100 START TEST nvmf_abort 00:37:04.100 ************************************ 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:04.100 * Looking for test storage... 00:37:04.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.100 --rc genhtml_branch_coverage=1 00:37:04.100 --rc genhtml_function_coverage=1 00:37:04.100 --rc genhtml_legend=1 00:37:04.100 --rc geninfo_all_blocks=1 00:37:04.100 --rc geninfo_unexecuted_blocks=1 00:37:04.100 00:37:04.100 ' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.100 --rc genhtml_branch_coverage=1 00:37:04.100 --rc genhtml_function_coverage=1 00:37:04.100 --rc genhtml_legend=1 00:37:04.100 --rc geninfo_all_blocks=1 00:37:04.100 --rc geninfo_unexecuted_blocks=1 00:37:04.100 00:37:04.100 ' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.100 --rc genhtml_branch_coverage=1 00:37:04.100 --rc genhtml_function_coverage=1 00:37:04.100 --rc genhtml_legend=1 00:37:04.100 --rc geninfo_all_blocks=1 00:37:04.100 --rc geninfo_unexecuted_blocks=1 00:37:04.100 00:37:04.100 ' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:04.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.100 --rc genhtml_branch_coverage=1 00:37:04.100 --rc genhtml_function_coverage=1 00:37:04.100 --rc genhtml_legend=1 00:37:04.100 --rc geninfo_all_blocks=1 00:37:04.100 --rc geninfo_unexecuted_blocks=1 00:37:04.100 00:37:04.100 ' 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.100 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.360 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.361 22:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:04.361 22:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:10.937 22:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:10.937 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:10.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.937 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:10.938 Found net devices under 0000:af:00.0: cvl_0_0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:10.938 Found net devices under 0000:af:00.1: cvl_0_1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:10.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:10.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:37:10.938 00:37:10.938 --- 10.0.0.2 ping statistics --- 00:37:10.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.938 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:10.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:10.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:37:10.938 00:37:10.938 --- 10.0.0.1 ping statistics --- 00:37:10.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.938 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=547355 
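The nvmftestinit trace above builds a self-contained NVMe/TCP path on one machine: the two cvl_* ports of the e810 NIC (presumably cabled back-to-back) are split so the target port lives in its own network namespace while the initiator stays in the root namespace. Condensed, the commands the nvmf/common.sh trace shows are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachable
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachable

The two ping RTTs (0.253 ms and 0.122 ms) confirm both directions work before nvmf_tgt is launched inside the namespace.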
00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 547355 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 547355 ']' 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.938 [2024-12-16 22:42:59.757003] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:10.938 [2024-12-16 22:42:59.757917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:10.938 [2024-12-16 22:42:59.757949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.938 [2024-12-16 22:42:59.834504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:10.938 [2024-12-16 22:42:59.856122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.938 [2024-12-16 22:42:59.856159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.938 [2024-12-16 22:42:59.856166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.938 [2024-12-16 22:42:59.856172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.938 [2024-12-16 22:42:59.856177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:10.938 [2024-12-16 22:42:59.857460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.938 [2024-12-16 22:42:59.857565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.938 [2024-12-16 22:42:59.857566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.938 [2024-12-16 22:42:59.919281] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:10.938 [2024-12-16 22:42:59.920209] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:10.938 [2024-12-16 22:42:59.920612] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:10.938 [2024-12-16 22:42:59.920705] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.938 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.938 [2024-12-16 22:42:59.986271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.939 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:10.939 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.939 22:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 Malloc0 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 Delay0 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 [2024-12-16 22:43:00.074182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.939 22:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:10.939 [2024-12-16 22:43:00.195350] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:12.843 Initializing NVMe Controllers 00:37:12.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:12.843 controller IO queue size 128 less than required 00:37:12.843 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:12.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:12.843 Initialization complete. Launching workers. 
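Condensing the xtrace above, the abort test builds its target with the following RPC sequence. This is a sketch rather than the script itself: paths are shortened, and it assumes rpc_cmd is a thin wrapper over the rpc.py shown in the trace and that nvmf_tgt is already listening on the default RPC socket.

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MB RAM-backed bdev, 4096-byte blocks
    # wrap Malloc0 in a delay bdev: all four latencies set to 1,000,000 us (~1 s)
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # drive it with the abort example: one core, 1 s run, queue depth 128
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The second-long artificial latency is the point of the Delay0 bdev: it keeps reads in flight long enough for the abort commands to catch them.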
00:37:12.843 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37832 00:37:12.843 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37893, failed to submit 66 00:37:12.843 success 37832, unsuccessful 61, failed 0 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:12.843 rmmod nvme_tcp 00:37:12.843 rmmod nvme_fabrics 00:37:12.843 rmmod nvme_keyring 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 547355 ']' 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 547355 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 547355 ']' 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 547355 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547355 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547355' 00:37:12.843 killing process with pid 547355 00:37:12.843 
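The final counters are worth a sanity check: 127 normal completions plus 37,832 failed (aborted) reads gives 37,959 I/Os issued, while 37,893 aborts submitted plus 66 that could not be submitted covers the same population; the 127 reads that completed normally match the 61 unsuccessful aborts plus the 66 never submitted. This reading assumes the example attempts one abort per outstanding I/O, which the matching totals suggest but the log does not state outright.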
22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 547355 00:37:12.843 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 547355 00:37:13.101 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:13.101 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:13.101 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:13.101 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:13.102 22:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.633 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.633 00:37:15.633 real 0m11.175s 00:37:15.633 user 0m10.660s 00:37:15.633 sys 0m5.728s 00:37:15.633 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.633 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.633 ************************************ 00:37:15.633 END TEST nvmf_abort 00:37:15.633 ************************************ 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:15.634 ************************************ 00:37:15.634 START TEST nvmf_ns_hotplug_stress 00:37:15.634 ************************************ 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:15.634 * Looking for test storage... 
00:37:15.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:15.634 22:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:15.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.634 --rc genhtml_branch_coverage=1 00:37:15.634 --rc genhtml_function_coverage=1 00:37:15.634 --rc genhtml_legend=1 00:37:15.634 --rc geninfo_all_blocks=1 00:37:15.634 --rc geninfo_unexecuted_blocks=1 00:37:15.634 00:37:15.634 ' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:15.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.634 --rc genhtml_branch_coverage=1 00:37:15.634 --rc genhtml_function_coverage=1 00:37:15.634 --rc genhtml_legend=1 00:37:15.634 --rc geninfo_all_blocks=1 00:37:15.634 --rc geninfo_unexecuted_blocks=1 00:37:15.634 00:37:15.634 ' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:15.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.634 --rc genhtml_branch_coverage=1 00:37:15.634 --rc genhtml_function_coverage=1 00:37:15.634 --rc genhtml_legend=1 00:37:15.634 --rc geninfo_all_blocks=1 00:37:15.634 --rc geninfo_unexecuted_blocks=1 00:37:15.634 00:37:15.634 ' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:15.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.634 --rc genhtml_branch_coverage=1 00:37:15.634 --rc genhtml_function_coverage=1 
00:37:15.634 --rc genhtml_legend=1 00:37:15.634 --rc geninfo_all_blocks=1 00:37:15.634 --rc geninfo_unexecuted_blocks=1 00:37:15.634 00:37:15.634 ' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
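Note how common.sh derives the initiator identity above: nvme-cli generates the host NQN, and the host ID is the trailing UUID of that NQN. A standalone sketch of the same derivation (assumes nvme-cli is installed; the parameter expansion is one way to extract the UUID, not necessarily the one common.sh uses):

    HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80b56b8f-...
    HOSTID=${HOSTNQN##*:}           # keep only the text after the last ':', i.e. the UUID
    # later handed to the initiator as: nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" ...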
00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:15.634 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:15.635 22:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:20.909 22:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:20.909 22:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:20.909 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:20.909 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:21.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:21.169 
22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:21.169 Found net devices under 0000:af:00.0: cvl_0_0 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:21.169 Found net devices under 0000:af:00.1: cvl_0_1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:21.169 22:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:21.169 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:21.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:21.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:37:21.428 00:37:21.428 --- 10.0.0.2 ping statistics --- 00:37:21.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.428 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:21.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:21.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:37:21.428 00:37:21.428 --- 10.0.0.1 ping statistics --- 00:37:21.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.428 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:21.428 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:21.429 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:21.429 22:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=551138 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 551138 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 551138 ']' 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
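The network setup traced above splits the two ports of the detected E810 NIC instead of using veth pairs: cvl_0_0 moves into a fresh namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for port 4420 and a ping in each direction to prove reachability. Condensed into a sketch (interface names are whatever this machine detected):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator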
00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:21.429 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:21.429 [2024-12-16 22:43:11.062260] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:21.429 [2024-12-16 22:43:11.063232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:21.429 [2024-12-16 22:43:11.063266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:21.429 [2024-12-16 22:43:11.124678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:21.688 [2024-12-16 22:43:11.147703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:21.688 [2024-12-16 22:43:11.147737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:21.688 [2024-12-16 22:43:11.147744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:21.688 [2024-12-16 22:43:11.147750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:21.688 [2024-12-16 22:43:11.147755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:21.688 [2024-12-16 22:43:11.149076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.688 [2024-12-16 22:43:11.149184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.688 [2024-12-16 22:43:11.149184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:21.688 [2024-12-16 22:43:11.212588] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:21.688 [2024-12-16 22:43:11.213244] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:21.688 [2024-12-16 22:43:11.213619] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:21.688 [2024-12-16 22:43:11.213783] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
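The target itself is then launched inside that namespace, which is why every later RPC in this test goes through ip netns exec. Condensed from the trace, with the flag meanings the EAL banner above confirms:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
    # -m 0xE          : core mask for cores 1-3 (the three reactors reported above)
    # -e 0xFFFF       : tracepoint group mask ('spdk_trace -s nvmf -i 0' can snapshot it)
    # -i 0            : shared-memory id, the $NVMF_APP_SHM_ID the cleanup trap dumps
    # --interrupt-mode: reactors sleep on events instead of busy-polling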
00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:21.688 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:21.947 [2024-12-16 22:43:11.449926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.947 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:22.206 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.206 [2024-12-16 22:43:11.818374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.206 22:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:22.465 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:22.724 Malloc0 00:37:22.724 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:22.983 Delay0 00:37:22.983 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.983 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:23.242 NULL1 00:37:23.242 22:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
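With cnode1 populated (Delay0 as namespace 1, NULL1 as a second, resizable namespace), the stress phase pairs a 30-second random-read load against the target with a churn loop that hot-removes and re-adds namespace 1 and grows NULL1 by one unit per pass, as the trace below shows. A sketch of the loop, assuming the harness keeps iterating while the perf process is alive:

    build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &        # background I/O load
    PERF_PID=$!
    size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do            # until perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        size=$((size + 1))
        rpc.py bdev_null_resize NULL1 "$size"            # 1001, 1002, ... as logged below
    done

The bursts of "Read completed with error (sct=0, sc=11)" interleaved below are consistent with reads arriving while namespace 1 is detached, and -Q 1000 keeps perf running through them, printing roughly one line per thousand (hence the "Message suppressed 999 times" notes).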
00:37:23.501 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=551586 00:37:23.501 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:23.501 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:23.501 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.760 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.760 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:23.760 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:24.018 true 00:37:24.018 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:24.018 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.277 22:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.536 22:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:24.536 22:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:24.536 true 00:37:24.536 22:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:24.536 22:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.912 Read completed with error (sct=0, sc=11) 00:37:25.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.913 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.913 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:25.913 22:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:37:26.171 true
00:37:26.171 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:26.171 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:26.430 22:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:26.688 22:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:37:26.688 22:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:37:26.688 true
00:37:26.688 22:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:26.688 22:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:28.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:28.066 22:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:28.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:28.324 22:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:37:28.324 22:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:37:28.324 true
00:37:28.324 22:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:28.324 22:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:29.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:29.260 22:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:29.519 22:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:37:29.519 22:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:37:29.519 true
00:37:29.519 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:29.519 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:29.777 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:30.036 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:37:30.036 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:37:30.295 true
00:37:30.295 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:30.295 22:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:31.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:31.672 22:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:31.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:31.672 22:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:37:31.672 22:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:37:31.930 true
00:37:31.930 22:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:31.930 22:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:32.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:32.866 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:32.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:32.866 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:37:32.866 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:37:33.125 true
00:37:33.125 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:33.125 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:33.125 22:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:33.384 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:37:33.384 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:37:33.643 true
00:37:33.643 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:33.643 22:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:35.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:35.021 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:35.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:35.021 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:37:35.021 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:37:35.280 true
00:37:35.280 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:35.280 22:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:36.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:36.222 22:43:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.222 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:36.222 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:36.482 true 00:37:36.482 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:36.482 22:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.482 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.747 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:36.747 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:37.007 true 00:37:37.007 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:37.007 22:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.944 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:37.944 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.203 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:38.203 22:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:38.461 true 00:37:38.461 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586 00:37:38.461 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.720 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.979 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:38.979 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:37:38.979 true
00:37:38.979 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:38.979 22:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:40.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:40.356 22:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:40.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:40.356 22:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:37:40.356 22:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:37:40.615 true
00:37:40.616 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:40.616 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:40.874 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:40.874 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:37:40.874 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:37:41.133 true
00:37:41.133 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:41.133 22:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:42.511 22:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:42.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:42.511 22:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:37:42.511 22:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:37:42.770 true
00:37:42.770 22:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:42.770 22:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:43.704 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:43.704 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:37:43.704 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:37:43.963 true
00:37:43.963 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:43.963 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:43.963 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:44.221 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:37:44.221 22:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:37:44.480 true
00:37:44.480 22:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:44.480 22:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:45.416 22:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:45.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:45.674 22:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:37:45.674 22:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:37:45.931 true
00:37:45.931 22:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:45.931 22:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:46.867 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:46.867 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:37:46.867 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:37:47.125 true
00:37:47.125 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:47.125 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:47.384 22:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:47.643 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:37:47.643 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:37:47.643 true
00:37:47.643 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:47.643 22:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:49.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:49.062 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:49.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:49.062 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:37:49.062 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:37:49.062 true
00:37:49.062 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:49.062 22:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:50.076 22:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:50.336 22:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:37:50.336 22:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:37:50.336 true
00:37:50.336 22:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:50.336 22:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:50.594 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:50.852 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:37:50.852 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:37:51.110 true
00:37:51.110 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:51.110 22:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:52.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:52.046 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:52.304 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:37:52.304 22:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:37:52.304 true
00:37:52.562 22:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:52.562 22:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:53.128 22:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:53.386 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:37:53.386 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:37:53.644 true
00:37:53.644 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:53.644 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:53.644 Initializing NVMe Controllers
00:37:53.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:53.644 Controller IO queue size 128, less than required.
00:37:53.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:53.644 Controller IO queue size 128, less than required.
00:37:53.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:53.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:53.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:53.644 Initialization complete. Launching workers.
00:37:53.644 ========================================================
00:37:53.644                                                                                            Latency(us)
00:37:53.644 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:53.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1872.89       0.91   42325.40    2879.65 1127043.70
00:37:53.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16393.28       8.00    7789.32    1569.08  369385.30
00:37:53.644 ========================================================
00:37:53.644 Total                                                                    :   18266.18       8.92   11330.42    1569.08 1127043.70
00:37:53.903 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:54.162 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:37:54.162 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:37:54.162 true
00:37:54.162 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551586
00:37:54.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (551586) - No such process
00:37:54.162 22:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 551586
00:37:54.162 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:54.420 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:37:54.679 null0
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:37:54.679 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:37:54.938 null1
00:37:54.938 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:37:54.938
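
For reference, the @44-@50 markers above are the xtrace of the first phase of ns_hotplug_stress.sh: while the I/O workload (PID 551586 in this run) stays alive, namespace 1 is hot-removed and re-added on the Delay0 bdev, and the NULL1 null bdev is resized one step larger each pass (1003, 1004, ...); when the workload exits, kill -0 prints the "No such process" error seen above and the loop ends. A minimal bash sketch of that loop, reconstructed from the trace alone (the $rpc_py and $perf_pid variable names and the starting size are assumptions, not taken from the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper variable
    null_size=1000                                                           # assumed starting size
    while kill -0 "$perf_pid"; do                                        # @44: loop while the workload runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach it backed by Delay0
        null_size=$((null_size + 1))                                     # @49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                    # @50: the "true" lines above are its output
    done
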
22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:54.938 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:55.197 null2 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:55.197 null3 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.197 22:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:55.455 null4 00:37:55.455 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.455 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.455 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:55.713 null5 00:37:55.713 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.713 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.713 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:55.971 null6 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:55.971 null7 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 
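
The null0 through null7 lines above are the output of the @58-@60 setup for the threaded phase: eight null bdevs, one per worker, created with the size and block-size arguments (100 and 4096) visible in the trace. A sketch reconstructed from those markers ($rpc_py is again an assumed name):

    nthreads=8   # @58
    pids=()      # @58: collects worker PIDs for the launcher that follows
    for ((i = 0; i < nthreads; i++)); do              # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096  # @60: prints the new bdev name, e.g. "null0"
    done
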
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.971 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 556780 556782 556783 556785 556787 556789 556791 556792 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.972 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.231 22:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:56.231 22:43:45 
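
From this point the trace interleaves eight backgrounded add_remove workers (the PIDs 556780 through 556792 passed to wait above), each cycling its own namespace against its own null bdev ten times, which is why the @14-@18 function markers appear out of order across workers. A bash sketch of the worker (@14-@18) and its launcher (@62-@66), reconstructed from the trace ($rpc_py is an assumed name; nthreads and pids come from the @58 setup above):

    add_remove() {
        local nsid=$1 bdev=$2                    # @14
        for ((i = 0; i < 10; i++)); do           # @16: ten add/remove cycles per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    for ((i = 0; i < nthreads; i++)); do  # @62
        add_remove $((i + 1)) "null$i" &  # @63: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)                        # @64
    done
    wait "${pids[@]}"                     # @66
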
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.489 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.490 22:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.490 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.748 22:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.748 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:57.007 22:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:57.007 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:57.265 22:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.265 22:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:57.524 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:57.783 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.042 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.043 22:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.043 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:58.319 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:58.319 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:58.320 22:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:58.578 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.578 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:58.579 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:58.837 22:43:48 
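Every rpc.py invocation in this trace is a thin client that posts a JSON-RPC request to the target's local Unix socket, which is why only the method name and its arguments show up in the trace. As a hedged illustration only, one of the add_ns calls above would look roughly like this on the wire, assuming the default /var/tmp/spdk.sock socket and the usual nvmf_subsystem_add_ns parameter shape (an nqn plus a namespace object):

printf '%s' '{"jsonrpc": "2.0", "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
             "namespace": {"nsid": 7, "bdev_name": "null6"}}}' |
    nc -U /var/tmp/spdk.sock   # assumed socket path; rpc.py -s overrides it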
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:58.837 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:59.096 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.356 22:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.615 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:59.874 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.133 rmmod nvme_tcp 00:38:00.133 rmmod nvme_fabrics 00:38:00.133 rmmod nvme_keyring 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 551138 ']' 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 551138 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 551138 ']' 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 551138 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:00.133 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.133 22:43:49 
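At this point the stress passes are done; the trap is cleared and nvmftestfini tears the fixture down, a sequence that starts above and completes just below. Flattened into plain commands, the teardown the surrounding trace walks through amounts to the following (a sketch; the real logic in test/nvmf/common.sh wraps these steps in retries and error handling):

sync
modprobe -v -r nvme-tcp      # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 551138 && wait 551138   # killprocess: stop the nvmf target (reactor_1 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the test's firewall rules
ip -4 addr flush cvl_0_1     # release addresses on the secondary test port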
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551138 00:38:00.392 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:00.392 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:00.392 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551138' 00:38:00.392 killing process with pid 551138 00:38:00.392 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 551138 00:38:00.392 22:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 551138 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.392 22:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:02.927 00:38:02.927 real 0m47.255s 00:38:02.927 user 2m58.072s 00:38:02.927 sys 0m19.416s 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:02.927 ************************************ 00:38:02.927 END TEST nvmf_ns_hotplug_stress 00:38:02.927 ************************************ 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 
-- # '[' 4 -le 1 ']' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:02.927 ************************************ 00:38:02.927 START TEST nvmf_delete_subsystem 00:38:02.927 ************************************ 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:02.927 * Looking for test storage... 00:38:02.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.927 --rc genhtml_branch_coverage=1 00:38:02.927 --rc genhtml_function_coverage=1 00:38:02.927 --rc genhtml_legend=1 00:38:02.927 --rc geninfo_all_blocks=1 00:38:02.927 --rc geninfo_unexecuted_blocks=1 00:38:02.927 00:38:02.927 ' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.927 --rc genhtml_branch_coverage=1 00:38:02.927 --rc genhtml_function_coverage=1 00:38:02.927 --rc genhtml_legend=1 00:38:02.927 --rc geninfo_all_blocks=1 00:38:02.927 --rc geninfo_unexecuted_blocks=1 00:38:02.927 00:38:02.927 ' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.927 --rc genhtml_branch_coverage=1 00:38:02.927 --rc genhtml_function_coverage=1 00:38:02.927 --rc genhtml_legend=1 00:38:02.927 --rc geninfo_all_blocks=1 00:38:02.927 --rc geninfo_unexecuted_blocks=1 00:38:02.927 00:38:02.927 ' 00:38:02.927 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:02.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.927 --rc genhtml_branch_coverage=1 00:38:02.927 --rc genhtml_function_coverage=1 00:38:02.927 --rc 
genhtml_legend=1 00:38:02.927 --rc geninfo_all_blocks=1 00:38:02.927 --rc geninfo_unexecuted_blocks=1 00:38:02.927 00:38:02.928 ' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:02.928 22:43:52 
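Before the next test body runs, run_test sources the common helpers, and the scripts/common.sh markers above (@333 through @368) trace a dotted-version comparison: lt 1.15 2 asks whether the detected lcov 1.15 predates 2.x, so that the legacy --rc lcov_branch_coverage flags above get exported. A condensed sketch of that comparison as the markers suggest it works (reconstructed from the trace, not copied from the script):

cmp_versions() {                    # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"          # @336: "1.15" -> (1 15)
    read -ra ver2 <<< "$3"          # @337: "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # @364-@368: compare component v numerically, missing parts count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]                # every component matched
}

Here the first components already differ (1 < 2), so lt succeeds, matching the return 0 in the trace.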
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:02.928 22:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:09.498 22:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:09.498 22:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:09.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:09.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.498 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.499 22:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:09.499 Found net devices under 0000:af:00.0: cvl_0_0 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:09.499 Found net devices under 0000:af:00.1: cvl_0_1 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:09.499 22:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:09.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:09.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:38:09.499 00:38:09.499 --- 10.0.0.2 ping statistics --- 00:38:09.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.499 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:09.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:09.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:38:09.499 00:38:09.499 --- 10.0.0.1 ping statistics --- 00:38:09.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.499 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=560970 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 560970 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 560970 ']' 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
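For readers following the fused trace above: nvmftestinit/nvmf_tcp_init boils down to a handful of iproute2 and iptables commands. The first ice port is moved into a private network namespace to act as the target while the second stays in the host as the initiator, each side gets a /24 address, and port 4420 is opened before both directions are ping-verified. A minimal sketch of just those steps, using the interface names and addresses this particular run discovered (they are per-machine, not harness constants):

# target port in its own namespace, initiator port left in the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator = 10.0.0.1 (host side), target = 10.0.0.2 (namespace side)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two successful one-packet pings in the trace are exactly this check passing before the target is started.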
00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.499 [2024-12-16 22:43:58.322197] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:09.499 [2024-12-16 22:43:58.323125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:09.499 [2024-12-16 22:43:58.323157] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.499 [2024-12-16 22:43:58.401402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:09.499 [2024-12-16 22:43:58.422881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.499 [2024-12-16 22:43:58.422934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.499 [2024-12-16 22:43:58.422941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.499 [2024-12-16 22:43:58.422946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.499 [2024-12-16 22:43:58.422951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.499 [2024-12-16 22:43:58.424084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.499 [2024-12-16 22:43:58.424085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.499 [2024-12-16 22:43:58.486749] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:09.499 [2024-12-16 22:43:58.487309] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:09.499 [2024-12-16 22:43:58.487471] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
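The thread.c notices just above are what distinguish this run from the polled-mode variant of the same test: with --interrupt-mode, the app thread and both nvmf_tgt poll groups are switched to interrupt-driven scheduling, so the reactors sleep between events instead of busy-polling. Reduced from the nvmfappstart trace, the launch is roughly the sketch below; framework_get_reactors is assumed to be available in this SPDK build as a way to inspect the reactors afterwards, it is not something this run invokes:

# tracepoints on (-e 0xFFFF), reactors pinned to cores 0-1 (-m 0x3),
# interrupt mode instead of the default polled mode
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

# once /var/tmp/spdk.sock is up (the harness's waitforlisten handles this),
# reactor state can be queried over the same socket (assumed RPC)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors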
00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.499 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 [2024-12-16 22:43:58.552899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 [2024-12-16 22:43:58.581180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 NULL1 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 Delay0 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=561093 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:09.500 22:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:09.500 [2024-12-16 22:43:58.671086] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
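Spelled out, the rpc_cmd calls above provision the whole data path in six steps: a TCP transport, a subsystem capped at 10 namespaces, a listener on the target address, a 1000 MiB null bdev with 512-byte blocks, a delay bdev wrapped around it, and the namespace attach. The delay arguments are in microseconds, so every read and write is held for roughly a full second, which is what keeps 128-deep queues full of outstanding I/O for the delete to race against. A sketch with scripts/rpc.py standing in for the harness's rpc_cmd wrapper, ending with the perf run launched in the background:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                # 1000 MiB, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s held per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 5 s of 70% read / 30% write random I/O, 512 B, queue depth 128, cores 2-3
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The WARNING perf triggers on connect is expected: only cnode1 got a listener, so the initial connection to the discovery service on 10.0.0.2:4420 relies on fallback behavior SPDK has deprecated. In builds that accept the well-known discovery NQN as a subsystem name (an assumption, not something this run does), the warning can be avoided with $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 4420.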
00:38:11.402 22:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:11.402 22:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.402 22:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 starting I/O failed: -6 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 [2024-12-16 22:44:00.759782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8f140 is same with the state(6) to be set 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 
Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Read completed with error (sct=0, sc=8) 00:38:11.402 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read 
completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 starting I/O failed: -6 00:38:11.403 [2024-12-16 22:44:00.761142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa07c000c80 is same with the state(6) to be set 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read 
completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Read completed with error (sct=0, sc=8) 00:38:11.403 Write completed with error (sct=0, sc=8) 00:38:12.339 [2024-12-16 22:44:01.725164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c260 is same with the state(6) to be set 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 [2024-12-16 22:44:01.763329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce35f0 is same with the state(6) to be set 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 [2024-12-16 22:44:01.764154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ec60 is same with the state(6) to be set 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed 
with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 [2024-12-16 22:44:01.764894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa07c00d390 is same with the state(6) to be set 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Write completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 Read completed with error (sct=0, sc=8) 00:38:12.339 [2024-12-16 22:44:01.765546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa07c00d800 is same with the state(6) to be set 00:38:12.339 Initializing NVMe Controllers 00:38:12.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:12.339 Controller IO queue size 128, less than required. 00:38:12.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:12.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:12.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:12.339 Initialization complete. Launching workers. 
00:38:12.339 ======================================================== 00:38:12.339 Latency(us) 00:38:12.339 Device Information : IOPS MiB/s Average min max 00:38:12.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.92 0.08 918169.77 289.87 1011156.67 00:38:12.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.90 0.08 914448.63 248.46 1042861.04 00:38:12.339 ======================================================== 00:38:12.339 Total : 321.82 0.16 916297.71 248.46 1042861.04 00:38:12.339 00:38:12.339 [2024-12-16 22:44:01.766230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8c260 (9): Bad file descriptor 00:38:12.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:12.339 22:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.339 22:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:12.339 22:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561093 00:38:12.339 22:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561093 00:38:12.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (561093) - No such process 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 561093 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 561093 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 561093 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.598 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:12.598 [2024-12-16 22:44:02.297144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=561551 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:12.857 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:12.857 [2024-12-16 22:44:02.383114] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
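Decoding the first run's output: sct=0 selects the NVMe generic status set, where sc=0x8 is "Command Aborted due to SQ Deletion", so every "completed with error" line is an I/O that was still queued when nvmf_delete_subsystem tore the queue pairs down, and the ~916 ms averages in the latency table above simply reflect the delay bdev's one-second hold. perf exiting with "errors occurred" is therefore the pass condition here, not a failure. After firing the delete, the harness only has to wait for perf to die; stripped of the NOT/wait helpers, the bounded poll in delete_subsystem.sh is roughly:

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
    if (( delay++ > 30 )); then             # give up after ~15 s of 0.5 s naps
        echo 'perf did not exit in time' >&2
        exit 1
    fi
    sleep 0.5
done

The second run just launched repeats the pattern with a 3-second perf and a bound of 20 iterations; its latency table below shows every I/O completing at just over the one-second delay, with no aborted completions.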
00:38:13.116 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:13.116 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:13.116 22:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:13.685 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:13.685 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:13.685 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:14.253 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:14.253 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:14.253 22:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:14.821 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:14.821 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:14.821 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:15.389 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:15.389 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:15.389 22:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:15.648 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:15.648 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:15.648 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:15.907 Initializing NVMe Controllers 00:38:15.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:15.907 Controller IO queue size 128, less than required. 00:38:15.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:15.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:15.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:15.907 Initialization complete. Launching workers. 
00:38:15.907 ======================================================== 00:38:15.907 Latency(us) 00:38:15.907 Device Information : IOPS MiB/s Average min max 00:38:15.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002397.72 1000147.68 1042315.95 00:38:15.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004358.96 1000242.12 1010215.38 00:38:15.907 ======================================================== 00:38:15.907 Total : 256.00 0.12 1003378.34 1000147.68 1042315.95 00:38:15.907 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561551 00:38:16.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (561551) - No such process 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 561551 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:16.166 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:16.166 rmmod nvme_tcp 00:38:16.425 rmmod nvme_fabrics 00:38:16.425 rmmod nvme_keyring 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 560970 ']' 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 560970 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 560970 ']' 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 560970 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 560970 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 560970' 00:38:16.425 killing process with pid 560970 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 560970 00:38:16.425 22:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 560970 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:16.425 22:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:18.964 00:38:18.964 real 0m15.989s 00:38:18.964 user 0m25.980s 00:38:18.964 sys 0m5.940s 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:18.964 ************************************ 00:38:18.964 END TEST nvmf_delete_subsystem 00:38:18.964 ************************************ 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:18.964 ************************************ 00:38:18.964 START TEST nvmf_host_management 00:38:18.964 ************************************ 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:18.964 * Looking for test storage... 00:38:18.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:18.964 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:18.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.965 --rc genhtml_branch_coverage=1 00:38:18.965 --rc genhtml_function_coverage=1 00:38:18.965 --rc genhtml_legend=1 00:38:18.965 --rc geninfo_all_blocks=1 00:38:18.965 --rc geninfo_unexecuted_blocks=1 00:38:18.965 00:38:18.965 ' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:18.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.965 --rc genhtml_branch_coverage=1 00:38:18.965 --rc genhtml_function_coverage=1 00:38:18.965 --rc genhtml_legend=1 00:38:18.965 --rc geninfo_all_blocks=1 00:38:18.965 --rc geninfo_unexecuted_blocks=1 00:38:18.965 00:38:18.965 ' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:18.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.965 --rc genhtml_branch_coverage=1 00:38:18.965 --rc genhtml_function_coverage=1 00:38:18.965 --rc genhtml_legend=1 00:38:18.965 --rc geninfo_all_blocks=1 00:38:18.965 --rc geninfo_unexecuted_blocks=1 00:38:18.965 00:38:18.965 ' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:18.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.965 --rc genhtml_branch_coverage=1 00:38:18.965 --rc genhtml_function_coverage=1 00:38:18.965 --rc genhtml_legend=1 
00:38:18.965 --rc geninfo_all_blocks=1 00:38:18.965 --rc geninfo_unexecuted_blocks=1 00:38:18.965 00:38:18.965 ' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.965 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.966 22:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:18.966 22:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:25.539 22:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:25.539 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:25.539 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:25.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
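The @410-@428 records resolve each matched PCI function to its kernel interface by globbing the function's net/ directory in sysfs and checking link state. A small sketch of the same lookup, assuming only the 0000:af:00.0 address seen in the trace; the operstate read is a simplification of the harness's up/up test:

    #!/usr/bin/env bash
    # Map a PCI network function to its kernel interface(s) via sysfs,
    # the same lookup the @410-@428 records perform for each device.
    pci=0000:af:00.0   # address taken from the trace above
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdir ]] || continue                 # no netdev bound to this function
        dev=${netdir##*/}                            # strip the sysfs path, keep the name
        state=$(cat "/sys/class/net/$dev/operstate" 2> /dev/null)
        echo "Found net device under $pci: $dev ($state)"
    done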
00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:25.540 Found net devices under 0000:af:00.0: cvl_0_0 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:25.540 Found net devices under 0000:af:00.1: cvl_0_1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:25.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:38:25.540 00:38:25.540 --- 10.0.0.2 ping statistics --- 00:38:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.540 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:25.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:38:25.540 00:38:25.540 --- 10.0.0.1 ping statistics --- 00:38:25.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.540 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:25.540 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=565664 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 565664 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565664 ']' 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:25.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 [2024-12-16 22:44:14.417756] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:25.541 [2024-12-16 22:44:14.418715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:25.541 [2024-12-16 22:44:14.418753] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.541 [2024-12-16 22:44:14.495481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:25.541 [2024-12-16 22:44:14.520230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.541 [2024-12-16 22:44:14.520270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.541 [2024-12-16 22:44:14.520278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.541 [2024-12-16 22:44:14.520283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:25.541 [2024-12-16 22:44:14.520288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:25.541 [2024-12-16 22:44:14.521724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.541 [2024-12-16 22:44:14.521832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.541 [2024-12-16 22:44:14.521940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.541 [2024-12-16 22:44:14.521941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:25.541 [2024-12-16 22:44:14.585123] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:25.541 [2024-12-16 22:44:14.586287] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:25.541 [2024-12-16 22:44:14.586720] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:25.541 [2024-12-16 22:44:14.586932] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:25.541 [2024-12-16 22:44:14.586966] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
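At this point the target is up inside the cvl_0_0_ns_spdk namespace with reactors on cores 1-4 (-m 0x1E) and every poll group switched to interrupt mode. A condensed sketch of what the nvmfappstart/waitforlisten pair above amounts to; the binary path, namespace name, and flags are copied from the trace, while the readiness probe via rpc_get_methods is our stand-in for waitforlisten's check:

    #!/usr/bin/env bash
    # Launch nvmf_tgt inside the test namespace and block until its RPC
    # socket (/var/tmp/spdk.sock by default) starts answering.
    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!

    for _ in {1..100}; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # rpc_get_methods succeeds once the app is listening on the socket
        "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done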
00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 [2024-12-16 22:44:14.662575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 Malloc0 00:38:25.541 [2024-12-16 22:44:14.746812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=565725 00:38:25.541 22:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 565725 /var/tmp/bdevperf.sock 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565725 ']' 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:25.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:25.541 { 00:38:25.541 "params": { 00:38:25.541 "name": "Nvme$subsystem", 00:38:25.541 "trtype": "$TEST_TRANSPORT", 00:38:25.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:25.541 "adrfam": "ipv4", 00:38:25.541 "trsvcid": "$NVMF_PORT", 00:38:25.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:25.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:25.541 "hdgst": ${hdgst:-false}, 00:38:25.541 "ddgst": ${ddgst:-false} 00:38:25.541 }, 00:38:25.541 "method": "bdev_nvme_attach_controller" 00:38:25.541 } 00:38:25.541 EOF 00:38:25.541 )") 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
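gen_nvmf_target_json builds the bdevperf --json config by expanding the heredoc above once per subsystem id and piping the joined stanzas through jq; the fully expanded document is printed next in the trace. A trimmed sketch of the technique, where the outer subsystems/bdev wrapper is our simplification of the real document the helper emits:

    #!/usr/bin/env bash
    # Emit one bdev_nvme_attach_controller stanza per subsystem id and
    # hand the comma-joined result to jq for validation/pretty-printing.
    TEST_TRANSPORT=tcp
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420
    config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
    done
    (IFS=','; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "${config[*]}") | jq .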
00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:25.541 22:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:25.541 "params": { 00:38:25.541 "name": "Nvme0", 00:38:25.541 "trtype": "tcp", 00:38:25.541 "traddr": "10.0.0.2", 00:38:25.541 "adrfam": "ipv4", 00:38:25.541 "trsvcid": "4420", 00:38:25.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:25.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:25.541 "hdgst": false, 00:38:25.541 "ddgst": false 00:38:25.541 }, 00:38:25.541 "method": "bdev_nvme_attach_controller" 00:38:25.541 }' 00:38:25.541 [2024-12-16 22:44:14.842492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:25.541 [2024-12-16 22:44:14.842535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565725 ] 00:38:25.541 [2024-12-16 22:44:14.916572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.541 [2024-12-16 22:44:14.939672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.541 Running I/O for 10 seconds... 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=105 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 105 -ge 100 ']' 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.542 [2024-12-16 22:44:15.186368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.186470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe370 is same with the state(6) to be set 00:38:25.542 22:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.542 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:25.542 [2024-12-16 22:44:15.196074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:25.542 [2024-12-16 22:44:15.196106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:25.542 [2024-12-16 22:44:15.196122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:25.542 [2024-12-16 22:44:15.196136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:25.542 [2024-12-16 22:44:15.196150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf56d40 is same with the state(6) to be set 00:38:25.542 [2024-12-16 22:44:15.196244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
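The @84/@85 steps above toggle host access while bdevperf still has 64-deep queues in flight: removing the host forces its admin and I/O queue pairs down, so every queued WRITE completes with ABORTED - SQ DELETION (00/08), which is exactly what the completion dumps before and after this point record. The same disconnect/reconnect can be driven by hand over the target's RPC socket; the NQNs and rpc.py path are as in the trace, and the one-second pause is an arbitrary choice:

    #!/usr/bin/env bash
    # Revoke, then restore, a host's access to a subsystem while I/O runs.
    # The target tears down the host's qpairs on removal, so in-flight
    # commands complete as ABORTED - SQ DELETION rather than vanishing.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1   # give the target time to drop the admin and I/O qpairs
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0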
00:38:25.542 [2024-12-16 22:44:15.196310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 
[2024-12-16 22:44:15.196457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.542 [2024-12-16 22:44:15.196471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.542 [2024-12-16 22:44:15.196477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 
22:44:15.196599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 
22:44:15.196742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 
22:44:15.196884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.196991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.196999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.197005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.197013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.197019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 
22:44:15.197027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.197041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.543 [2024-12-16 22:44:15.197047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.543 [2024-12-16 22:44:15.197055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 22:44:15.197156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:25.544 [2024-12-16 22:44:15.197163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.544 [2024-12-16 
22:44:15.198090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:25.544 task offset: 24576 on job bdev=Nvme0n1 fails 00:38:25.544 00:38:25.544 Latency(us) 00:38:25.544 [2024-12-16T21:44:15.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:25.544 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:25.544 Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:25.544 Verification LBA range: start 0x0 length 0x400 00:38:25.544 Nvme0n1 : 0.11 1785.10 111.57 595.03 0.00 24771.61 1458.96 26713.72 00:38:25.544 [2024-12-16T21:44:15.245Z] =================================================================================================================== 00:38:25.544 [2024-12-16T21:44:15.245Z] Total : 1785.10 111.57 595.03 0.00 24771.61 1458.96 26713.72 00:38:25.544 [2024-12-16 22:44:15.200448] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:25.544 [2024-12-16 22:44:15.200466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf56d40 (9): Bad file descriptor 00:38:25.544 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.544 22:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:25.801 [2024-12-16 22:44:15.294377] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 565725 00:38:26.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (565725) - No such process 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:26.736 { 00:38:26.736 "params": { 00:38:26.736 "name": "Nvme$subsystem", 00:38:26.736 "trtype": "$TEST_TRANSPORT", 00:38:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.736 "adrfam": "ipv4", 00:38:26.736 "trsvcid": "$NVMF_PORT", 00:38:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:26.736 "hdgst": ${hdgst:-false}, 00:38:26.736 "ddgst": 
${ddgst:-false} 00:38:26.736 }, 00:38:26.736 "method": "bdev_nvme_attach_controller" 00:38:26.736 } 00:38:26.736 EOF 00:38:26.736 )") 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:26.736 22:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:26.736 "params": { 00:38:26.736 "name": "Nvme0", 00:38:26.736 "trtype": "tcp", 00:38:26.736 "traddr": "10.0.0.2", 00:38:26.736 "adrfam": "ipv4", 00:38:26.736 "trsvcid": "4420", 00:38:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.736 "hdgst": false, 00:38:26.736 "ddgst": false 00:38:26.736 }, 00:38:26.736 "method": "bdev_nvme_attach_controller" 00:38:26.736 }' 00:38:26.736 [2024-12-16 22:44:16.258111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:26.736 [2024-12-16 22:44:16.258157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565955 ] 00:38:26.736 [2024-12-16 22:44:16.332854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.736 [2024-12-16 22:44:16.355244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.995 Running I/O for 1 seconds... 00:38:28.371 2048.00 IOPS, 128.00 MiB/s 00:38:28.371 Latency(us) 00:38:28.371 [2024-12-16T21:44:18.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:28.371 Verification LBA range: start 0x0 length 0x400 00:38:28.371 Nvme0n1 : 1.02 2065.03 129.06 0.00 0.00 30512.24 5024.43 26963.38 00:38:28.371 [2024-12-16T21:44:18.072Z] =================================================================================================================== 00:38:28.371 [2024-12-16T21:44:18.072Z] Total : 2065.03 129.06 0.00 0.00 30512.24 5024.43 26963.38 00:38:28.371 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:28.371 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:28.371 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:28.371 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:28.371 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.372 rmmod nvme_tcp 00:38:28.372 rmmod nvme_fabrics 00:38:28.372 rmmod nvme_keyring 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 565664 ']' 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 565664 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 565664 ']' 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 565664 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565664 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565664' 00:38:28.372 killing process with pid 565664 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 565664 00:38:28.372 22:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 565664 00:38:28.631 [2024-12-16 22:44:18.117112] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:28.631 22:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.631 22:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.535 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.536 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:30.536 00:38:30.536 real 0m11.964s 00:38:30.536 user 0m16.423s 00:38:30.536 sys 0m6.103s 00:38:30.536 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.536 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:30.536 ************************************ 00:38:30.536 END TEST nvmf_host_management 00:38:30.536 ************************************ 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:30.795 ************************************ 00:38:30.795 START TEST nvmf_lvol 00:38:30.795 ************************************ 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:30.795 * Looking for test storage... 
00:38:30.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.795 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:30.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.795 --rc genhtml_branch_coverage=1 00:38:30.795 --rc genhtml_function_coverage=1 00:38:30.795 --rc genhtml_legend=1 00:38:30.795 --rc geninfo_all_blocks=1 00:38:30.796 --rc geninfo_unexecuted_blocks=1 00:38:30.796 00:38:30.796 ' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:30.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.796 --rc genhtml_branch_coverage=1 00:38:30.796 --rc genhtml_function_coverage=1 00:38:30.796 --rc genhtml_legend=1 00:38:30.796 --rc geninfo_all_blocks=1 00:38:30.796 --rc geninfo_unexecuted_blocks=1 00:38:30.796 00:38:30.796 ' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:30.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.796 --rc genhtml_branch_coverage=1 00:38:30.796 --rc genhtml_function_coverage=1 00:38:30.796 --rc genhtml_legend=1 00:38:30.796 --rc geninfo_all_blocks=1 00:38:30.796 --rc geninfo_unexecuted_blocks=1 00:38:30.796 00:38:30.796 ' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:30.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.796 --rc genhtml_branch_coverage=1 00:38:30.796 --rc genhtml_function_coverage=1 00:38:30.796 --rc genhtml_legend=1 00:38:30.796 --rc geninfo_all_blocks=1 00:38:30.796 --rc geninfo_unexecuted_blocks=1 00:38:30.796 00:38:30.796 ' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same toolchain and system directories, reordered ...] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same toolchain and system directories, reordered ...] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH value ...] 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.796 22:44:20
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:30.796 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:31.055 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:31.055 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:31.055 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:31.055 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:31.055 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:31.056 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:31.056 22:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:37.623 22:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:37.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.623 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:37.624 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:37.624 Found net devices under 0000:af:00.0: cvl_0_0 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:37.624 Found net devices under 0000:af:00.1: cvl_0_1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:37.624 
22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:37.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:37.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:38:37.624 00:38:37.624 --- 10.0.0.2 ping statistics --- 00:38:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.624 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:37.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:37.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:38:37.624 00:38:37.624 --- 10.0.0.1 ping statistics --- 00:38:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.624 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=569648 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 569648 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 569648 ']' 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.624 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:37.624 [2024-12-16 22:44:26.398455] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:37.625 [2024-12-16 22:44:26.399468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:37.625 [2024-12-16 22:44:26.399505] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:37.625 [2024-12-16 22:44:26.478897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:37.625 [2024-12-16 22:44:26.501369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.625 [2024-12-16 22:44:26.501405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:37.625 [2024-12-16 22:44:26.501412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.625 [2024-12-16 22:44:26.501418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.625 [2024-12-16 22:44:26.501422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:37.625 [2024-12-16 22:44:26.502684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.625 [2024-12-16 22:44:26.502793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.625 [2024-12-16 22:44:26.502794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:37.625 [2024-12-16 22:44:26.566031] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:37.625 [2024-12-16 22:44:26.566891] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:37.625 [2024-12-16 22:44:26.567327] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:37.625 [2024-12-16 22:44:26.567422] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
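
Annotation (not part of the captured trace): the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) reduces to the sketch below. This is a minimal sketch, assuming the E810 port names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing discovered in this run; it needs root.

    # Minimal sketch of the two-node NVMe/TCP test topology nvmf_tcp_init builds.
    ip netns add cvl_0_0_ns_spdk                        # target lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check

nvmf_tgt is then launched inside that namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7" record above), so the target only sees traffic arriving on cvl_0_0 at 10.0.0.2 while the initiator-side tools talk to it from the default namespace over cvl_0_1.
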
00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:37.625 [2024-12-16 22:44:26.791572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.625 22:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:37.625 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:37.625 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:37.625 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:37.625 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:37.883 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:38.142 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d803cb87-8bff-4449-a9fd-81b83e8473db 00:38:38.142 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d803cb87-8bff-4449-a9fd-81b83e8473db lvol 20 00:38:38.401 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=02c14fe4-d61e-4370-a3c3-af7cb113e025 00:38:38.401 22:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:38.401 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02c14fe4-d61e-4370-a3c3-af7cb113e025 00:38:38.660 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:38.918 [2024-12-16 22:44:28.407359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:38:38.918 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:39.176 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=570031 00:38:39.177 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:39.177 22:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:40.112 22:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 02c14fe4-d61e-4370-a3c3-af7cb113e025 MY_SNAPSHOT 00:38:40.370 22:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f3d4e771-2b77-47c1-ada2-40d66e0ad5bf 00:38:40.370 22:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 02c14fe4-d61e-4370-a3c3-af7cb113e025 30 00:38:40.628 22:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f3d4e771-2b77-47c1-ada2-40d66e0ad5bf MY_CLONE 00:38:40.886 22:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5ed50c7c-ebdb-4b48-b895-e220ece681fe 00:38:40.886 22:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5ed50c7c-ebdb-4b48-b895-e220ece681fe 00:38:41.144 22:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 570031 00:38:51.125 Initializing NVMe Controllers 00:38:51.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:51.125 Controller IO queue size 128, less than required. 00:38:51.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:51.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:51.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:51.125 Initialization complete. Launching workers. 
00:38:51.125 ======================================================== 00:38:51.125 Latency(us) 00:38:51.125 Device Information : IOPS MiB/s Average min max 00:38:51.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12563.44 49.08 10191.15 260.65 64659.28 00:38:51.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12444.94 48.61 10282.96 2427.36 62554.77 00:38:51.125 ======================================================== 00:38:51.125 Total : 25008.38 97.69 10236.84 260.65 64659.28 00:38:51.125 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02c14fe4-d61e-4370-a3c3-af7cb113e025 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d803cb87-8bff-4449-a9fd-81b83e8473db 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.125 rmmod nvme_tcp 00:38:51.125 rmmod nvme_fabrics 00:38:51.125 rmmod nvme_keyring 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 569648 ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 569648 ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569648' 00:38:51.125 killing process with pid 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 569648 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.125 22:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:52.503 00:38:52.503 real 0m21.720s 00:38:52.503 user 0m55.543s 00:38:52.503 sys 0m9.702s 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:52.503 ************************************ 00:38:52.503 END TEST nvmf_lvol 00:38:52.503 ************************************ 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:52.503 ************************************ 00:38:52.503 START TEST nvmf_lvs_grow 00:38:52.503 
************************************ 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:52.503 * Looking for test storage... 00:38:52.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:52.503 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:52.763 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:52.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.764 --rc genhtml_branch_coverage=1 00:38:52.764 --rc genhtml_function_coverage=1 00:38:52.764 --rc genhtml_legend=1 00:38:52.764 --rc geninfo_all_blocks=1 00:38:52.764 --rc geninfo_unexecuted_blocks=1 00:38:52.764 00:38:52.764 ' 00:38:52.764 22:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
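
Annotation (not part of the captured trace): the cmp_versions trace a few records up (scripts/common.sh@333-368, entered here as "lt 1.15 2" to choose lcov option spellings) compares dotted version strings component-wise. A minimal re-implementation sketch follows, assuming purely numeric components rather than the script's full decimal sanitizing:

    # lt A B: succeed (return 0) when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"                  # split on the same separators
        IFS='.-:' read -ra v2 <<< "$2"                  # the traced IFS=.-: uses
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}           # missing components count as 0
            (( a > b )) && return 1                     # first differing component decides
            (( a < b )) && return 0
        done
        return 1                                        # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc option spellings"

Here 1.15 splits into (1, 15) and 2 into (2), so the first component pair 1 < 2 decides the result, which is why the run above settles on the lcov_branch_coverage=1 style options.
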
00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:52.764 22:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:59.335 22:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:59.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:59.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:59.335 Found net devices under 0000:af:00.0: cvl_0_0 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:59.335 Found net devices under 0000:af:00.1: cvl_0_1 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.335 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.336 22:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.336 22:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:38:59.336 00:38:59.336 --- 10.0.0.2 ping statistics --- 00:38:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.336 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:59.336 00:38:59.336 --- 10.0.0.1 ping statistics --- 00:38:59.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.336 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575156 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575156 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575156 ']' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:59.336 [2024-12-16 22:44:48.174639] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:59.336 [2024-12-16 22:44:48.175554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:59.336 [2024-12-16 22:44:48.175587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.336 [2024-12-16 22:44:48.253463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.336 [2024-12-16 22:44:48.275050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.336 [2024-12-16 22:44:48.275086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.336 [2024-12-16 22:44:48.275093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.336 [2024-12-16 22:44:48.275098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.336 [2024-12-16 22:44:48.275104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.336 [2024-12-16 22:44:48.275616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.336 [2024-12-16 22:44:48.338717] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.336 [2024-12-16 22:44:48.338914] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:59.336 [2024-12-16 22:44:48.572274] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:59.336 ************************************ 00:38:59.336 START TEST lvs_grow_clean 00:38:59.336 ************************************ 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:59.336 22:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:59.595 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c lvol 150 00:38:59.854 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a0d16fcb-1913-491f-b1bd-73decbdff1fc 00:38:59.854 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:59.854 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:00.113 [2024-12-16 22:44:49.628002] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:00.113 [2024-12-16 22:44:49.628130] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:00.113 true 00:39:00.113 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:00.113 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:00.372 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:00.372 22:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:00.372 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0d16fcb-1913-491f-b1bd-73decbdff1fc 00:39:00.631 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:00.889 [2024-12-16 22:44:50.392463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:00.889 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:00.889 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=575641 00:39:00.890 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:00.890 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:00.890 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 575641 /var/tmp/bdevperf.sock 00:39:00.890 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 575641 ']' 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:01.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:01.148 [2024-12-16 22:44:50.634932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:01.148 [2024-12-16 22:44:50.634984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid575641 ] 00:39:01.148 [2024-12-16 22:44:50.706592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.148 [2024-12-16 22:44:50.729234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.148 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:01.149 22:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:01.407 Nvme0n1 00:39:01.407 22:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:01.666 [ 00:39:01.666 { 00:39:01.666 "name": "Nvme0n1", 00:39:01.666 "aliases": [ 00:39:01.666 "a0d16fcb-1913-491f-b1bd-73decbdff1fc" 00:39:01.666 ], 00:39:01.666 "product_name": "NVMe disk", 00:39:01.666 "block_size": 4096, 00:39:01.666 "num_blocks": 38912, 00:39:01.666 "uuid": "a0d16fcb-1913-491f-b1bd-73decbdff1fc", 00:39:01.666 "numa_id": 1, 00:39:01.666 "assigned_rate_limits": { 00:39:01.666 "rw_ios_per_sec": 0, 00:39:01.666 "rw_mbytes_per_sec": 0, 00:39:01.666 "r_mbytes_per_sec": 0, 00:39:01.666 "w_mbytes_per_sec": 0 00:39:01.666 }, 00:39:01.666 "claimed": false, 00:39:01.666 "zoned": false, 00:39:01.666 "supported_io_types": { 00:39:01.666 "read": true, 00:39:01.666 "write": true, 00:39:01.666 "unmap": true, 00:39:01.666 "flush": true, 00:39:01.666 "reset": true, 00:39:01.666 "nvme_admin": true, 00:39:01.666 "nvme_io": true, 00:39:01.666 "nvme_io_md": false, 00:39:01.666 "write_zeroes": true, 00:39:01.666 "zcopy": false, 00:39:01.666 "get_zone_info": false, 00:39:01.666 "zone_management": false, 00:39:01.666 "zone_append": false, 00:39:01.666 "compare": true, 00:39:01.666 "compare_and_write": true, 00:39:01.666 "abort": true, 00:39:01.666 "seek_hole": false, 00:39:01.666 "seek_data": false, 00:39:01.666 "copy": true, 
00:39:01.666 "nvme_iov_md": false 00:39:01.666 }, 00:39:01.666 "memory_domains": [ 00:39:01.666 { 00:39:01.666 "dma_device_id": "system", 00:39:01.666 "dma_device_type": 1 00:39:01.666 } 00:39:01.666 ], 00:39:01.666 "driver_specific": { 00:39:01.666 "nvme": [ 00:39:01.666 { 00:39:01.666 "trid": { 00:39:01.666 "trtype": "TCP", 00:39:01.666 "adrfam": "IPv4", 00:39:01.666 "traddr": "10.0.0.2", 00:39:01.666 "trsvcid": "4420", 00:39:01.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:01.666 }, 00:39:01.666 "ctrlr_data": { 00:39:01.666 "cntlid": 1, 00:39:01.666 "vendor_id": "0x8086", 00:39:01.666 "model_number": "SPDK bdev Controller", 00:39:01.666 "serial_number": "SPDK0", 00:39:01.666 "firmware_revision": "25.01", 00:39:01.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.666 "oacs": { 00:39:01.666 "security": 0, 00:39:01.666 "format": 0, 00:39:01.666 "firmware": 0, 00:39:01.666 "ns_manage": 0 00:39:01.666 }, 00:39:01.666 "multi_ctrlr": true, 00:39:01.666 "ana_reporting": false 00:39:01.666 }, 00:39:01.666 "vs": { 00:39:01.666 "nvme_version": "1.3" 00:39:01.666 }, 00:39:01.666 "ns_data": { 00:39:01.666 "id": 1, 00:39:01.666 "can_share": true 00:39:01.666 } 00:39:01.666 } 00:39:01.666 ], 00:39:01.666 "mp_policy": "active_passive" 00:39:01.666 } 00:39:01.666 } 00:39:01.666 ] 00:39:01.666 22:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:01.666 22:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=575650 00:39:01.666 22:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:01.667 Running I/O for 10 seconds... 
00:39:03.043 Latency(us)
00:39:03.043 [2024-12-16T21:44:52.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:03.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:03.043 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00
00:39:03.043 [2024-12-16T21:44:52.744Z] ===================================================================================================================
00:39:03.043 [2024-12-16T21:44:52.744Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00
00:39:03.043
00:39:03.610 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c
00:39:03.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:03.869 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00
00:39:03.869 [2024-12-16T21:44:53.570Z] ===================================================================================================================
00:39:03.869 [2024-12-16T21:44:53.570Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00
00:39:03.869
00:39:03.869 true
00:39:03.869 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c
00:39:03.869 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:39:04.128 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:39:04.128 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:39:04.128 22:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 575650
00:39:04.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:04.695 Nvme0n1 : 3.00 23283.33 90.95 0.00 0.00 0.00 0.00 0.00
00:39:04.695 [2024-12-16T21:44:54.396Z] ===================================================================================================================
00:39:04.695 [2024-12-16T21:44:54.396Z] Total : 23283.33 90.95 0.00 0.00 0.00 0.00 0.00
00:39:04.695
00:39:06.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:06.070 Nvme0n1 : 4.00 23384.00 91.34 0.00 0.00 0.00 0.00 0.00
00:39:06.070 [2024-12-16T21:44:55.771Z] ===================================================================================================================
00:39:06.070 [2024-12-16T21:44:55.771Z] Total : 23384.00 91.34 0.00 0.00 0.00 0.00 0.00
00:39:06.070
00:39:06.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:06.638 Nvme0n1 : 5.00 23451.00 91.61 0.00 0.00 0.00 0.00 0.00
00:39:06.638 [2024-12-16T21:44:56.339Z] ===================================================================================================================
00:39:06.638 [2024-12-16T21:44:56.339Z] Total : 23451.00 91.61 0.00 0.00 0.00 0.00 0.00
00:39:06.638
00:39:08.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:08.014 Nvme0n1 : 6.00 23500.67 91.80 0.00 0.00 0.00 0.00 0.00
00:39:08.014 [2024-12-16T21:44:57.715Z] ===================================================================================================================
00:39:08.014 [2024-12-16T21:44:57.715Z] Total : 23500.67 91.80 0.00 0.00 0.00 0.00 0.00
00:39:08.014
00:39:08.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:08.950 Nvme0n1 : 7.00 23545.29 91.97 0.00 0.00 0.00 0.00 0.00
00:39:08.950 [2024-12-16T21:44:58.651Z] ===================================================================================================================
00:39:08.950 [2024-12-16T21:44:58.651Z] Total : 23545.29 91.97 0.00 0.00 0.00 0.00 0.00
00:39:08.950
00:39:09.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:09.886 Nvme0n1 : 8.00 23570.75 92.07 0.00 0.00 0.00 0.00 0.00
00:39:09.886 [2024-12-16T21:44:59.587Z] ===================================================================================================================
00:39:09.886 [2024-12-16T21:44:59.587Z] Total : 23570.75 92.07 0.00 0.00 0.00 0.00 0.00
00:39:09.886
00:39:10.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:10.822 Nvme0n1 : 9.00 23576.44 92.10 0.00 0.00 0.00 0.00 0.00
00:39:10.822 [2024-12-16T21:45:00.523Z] ===================================================================================================================
00:39:10.822 [2024-12-16T21:45:00.523Z] Total : 23576.44 92.10 0.00 0.00 0.00 0.00 0.00
00:39:10.822
00:39:11.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:11.759 Nvme0n1 : 10.00 23593.70 92.16 0.00 0.00 0.00 0.00 0.00
00:39:11.759 [2024-12-16T21:45:01.460Z] ===================================================================================================================
00:39:11.759 [2024-12-16T21:45:01.460Z] Total : 23593.70 92.16 0.00 0.00 0.00 0.00 0.00
00:39:11.759
00:39:11.759
00:39:11.759 Latency(us)
00:39:11.759 [2024-12-16T21:45:01.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:11.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:11.759 Nvme0n1 : 10.00 23592.13 92.16 0.00 0.00 5421.98 3105.16 27837.20
00:39:11.759 [2024-12-16T21:45:01.460Z] ===================================================================================================================
00:39:11.759 [2024-12-16T21:45:01.460Z] Total : 23592.13 92.16 0.00 0.00 5421.98 3105.16 27837.20
00:39:11.759 {
00:39:11.759 "results": [
00:39:11.759 {
00:39:11.759 "job": "Nvme0n1",
00:39:11.759 "core_mask": "0x2",
00:39:11.759 "workload": "randwrite",
00:39:11.759 "status": "finished",
00:39:11.759 "queue_depth": 128,
00:39:11.759 "io_size": 4096,
00:39:11.759 "runtime": 10.003419,
00:39:11.759 "iops": 23592.13384943688,
00:39:11.759 "mibps": 92.15677284936281,
00:39:11.759 "io_failed": 0,
00:39:11.759 "io_timeout": 0,
00:39:11.759 "avg_latency_us": 5421.980528623446,
00:39:11.759 "min_latency_us": 3105.158095238095,
00:39:11.759 "max_latency_us": 27837.196190476192
00:39:11.759 }
00:39:11.759 ],
00:39:11.759 "core_count": 1
00:39:11.759 }
00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 575641
00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 575641 ']'
00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 575641
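Those interval rows carry the point of the test: between the one- and two-second reports the harness issued bdev_lvol_grow_lvstore against lvstore 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c while the random-write load was still running, and total_data_clusters moved from 49 (the 200 MiB file at 4 MiB per cluster, less metadata) to 99 (400 MiB), with every Fail/s column staying at 0.00. The grow was staged earlier by enlarging the backing file and rescanning the AIO bdev. Condensed from the trace, with $RPC and $LVS as shorthand for the rpc.py path and the lvstore UUID above:

  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 400M "$AIO"            # enlarge the backing file (created at 200M)
  "$RPC" bdev_aio_rescan aio_bdev    # bdev grows: 51200 -> 102400 blocks
  # ... the bdevperf random writes keep running against the lvol ...
  "$RPC" bdev_lvol_grow_lvstore -u "$LVS"    # lvstore claims the new space online
  "$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99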
00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 575641 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 575641' 00:39:11.759 killing process with pid 575641 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 575641 00:39:11.759 Received shutdown signal, test time was about 10.000000 seconds 00:39:11.759 00:39:11.759 Latency(us) 00:39:11.759 [2024-12-16T21:45:01.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:11.759 [2024-12-16T21:45:01.460Z] =================================================================================================================== 00:39:11.759 [2024-12-16T21:45:01.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:11.759 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 575641 00:39:12.018 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:12.277 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:12.277 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:12.277 22:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:12.536 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:12.536 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:12.536 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:12.810 [2024-12-16 22:45:02.332098] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 
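The last command above opens the negative check: bdev_aio_delete has just pulled aio_bdev out from under the lvstore (hence the vbdev_lvol hotremove NOTICE), so bdev_lvol_get_lvstores by UUID must now fail, and the NOT wrapper turns that expected failure into a pass. The trace that follows is the NOT machinery from autotest_common.sh; its contract reduces to roughly the sketch below (a simplification, not the real helper, which also classifies the exit status, e.g. the es > 128 signal check visible in the trace):

  # Simplified contract of the harness NOT helper: succeed only if "$@" fails.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded: test failure
    fi
    return 0     # command failed as expected (here: -19, "No such device")
  }
  NOT "$RPC" bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c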
00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:12.810 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:13.103 request: 00:39:13.103 { 00:39:13.103 "uuid": "7bebfe3c-1bb6-499a-9fb4-218eedc8e85c", 00:39:13.103 "method": "bdev_lvol_get_lvstores", 00:39:13.103 "req_id": 1 00:39:13.103 } 00:39:13.103 Got JSON-RPC error response 00:39:13.103 response: 00:39:13.103 { 00:39:13.103 "code": -19, 00:39:13.103 "message": "No such device" 00:39:13.103 } 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:13.103 aio_bdev 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a0d16fcb-1913-491f-b1bd-73decbdff1fc 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a0d16fcb-1913-491f-b1bd-73decbdff1fc 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:13.103 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:13.384 22:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0d16fcb-1913-491f-b1bd-73decbdff1fc -t 2000 00:39:13.667 [ 00:39:13.667 { 00:39:13.667 "name": "a0d16fcb-1913-491f-b1bd-73decbdff1fc", 00:39:13.667 "aliases": [ 00:39:13.667 "lvs/lvol" 00:39:13.667 ], 00:39:13.667 "product_name": "Logical Volume", 00:39:13.667 "block_size": 4096, 00:39:13.667 "num_blocks": 38912, 00:39:13.667 "uuid": "a0d16fcb-1913-491f-b1bd-73decbdff1fc", 00:39:13.667 "assigned_rate_limits": { 00:39:13.667 "rw_ios_per_sec": 0, 00:39:13.667 "rw_mbytes_per_sec": 0, 00:39:13.667 "r_mbytes_per_sec": 0, 00:39:13.667 "w_mbytes_per_sec": 0 00:39:13.667 }, 00:39:13.667 "claimed": false, 00:39:13.667 "zoned": false, 00:39:13.667 "supported_io_types": { 00:39:13.667 "read": true, 00:39:13.667 "write": true, 00:39:13.667 "unmap": true, 00:39:13.667 "flush": false, 00:39:13.667 "reset": true, 00:39:13.667 "nvme_admin": false, 00:39:13.667 "nvme_io": false, 00:39:13.667 "nvme_io_md": false, 00:39:13.667 "write_zeroes": true, 00:39:13.667 "zcopy": false, 00:39:13.668 "get_zone_info": false, 00:39:13.668 "zone_management": false, 00:39:13.668 "zone_append": false, 00:39:13.668 "compare": false, 00:39:13.668 "compare_and_write": false, 00:39:13.668 "abort": false, 00:39:13.668 "seek_hole": true, 00:39:13.668 "seek_data": true, 00:39:13.668 "copy": false, 00:39:13.668 "nvme_iov_md": false 00:39:13.668 }, 00:39:13.668 "driver_specific": { 00:39:13.668 "lvol": { 00:39:13.668 "lvol_store_uuid": "7bebfe3c-1bb6-499a-9fb4-218eedc8e85c", 00:39:13.668 "base_bdev": "aio_bdev", 00:39:13.668 "thin_provision": false, 00:39:13.668 "num_allocated_clusters": 38, 00:39:13.668 "snapshot": false, 00:39:13.668 "clone": false, 00:39:13.668 "esnap_clone": false 00:39:13.668 } 00:39:13.668 } 00:39:13.668 } 00:39:13.668 ] 00:39:13.668 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:13.668 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:13.668 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:13.668 22:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:13.668 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:13.668 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:13.943 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:13.943 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0d16fcb-1913-491f-b1bd-73decbdff1fc 00:39:14.237 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7bebfe3c-1bb6-499a-9fb4-218eedc8e85c 00:39:14.237 22:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:14.498 00:39:14.498 real 0m15.487s 00:39:14.498 user 0m14.881s 00:39:14.498 sys 0m1.570s 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:14.498 ************************************ 00:39:14.498 END TEST lvs_grow_clean 00:39:14.498 ************************************ 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:14.498 ************************************ 00:39:14.498 START TEST lvs_grow_dirty 00:39:14.498 ************************************ 00:39:14.498 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:14.499 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:14.757 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:14.757 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:14.757 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:15.015 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:15.015 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:15.015 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:15.274 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:15.274 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:15.274 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c894994-4d75-4bf7-8710-9161a7509cf5 lvol 150 00:39:15.533 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:15.533 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:15.533 22:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:15.533 [2024-12-16 22:45:05.143992] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:15.533 [2024-12-16 22:45:05.144116] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:15.533 true 00:39:15.533 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:15.533 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:15.791 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:15.791 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:16.050 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:16.050 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:16.382 [2024-12-16 22:45:05.880428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.382 22:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=578355 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 578355 /var/tmp/bdevperf.sock 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 578355 ']' 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:16.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
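At this point the dirty variant has rebuilt the same stack on a fresh lvstore (3c894994-4d75-4bf7-8710-9161a7509cf5) and a fresh 150 MiB lvol (fd4b2f4e-1273-401c-ad06-5a5b8976cce2), and has wired up the target-side export with the usual three-step chain before bdevperf attaches. Isolated from the trace, command forms exactly as issued, $RPC again being shorthand for the rpc.py path:

  # Create the subsystem; -a allows any host NQN, -s sets the serial number.
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  # Expose the lvol bdev (by UUID) as namespace 1 of the subsystem.
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd4b2f4e-1273-401c-ad06-5a5b8976cce2
  # Listen on the data port, then advertise through the discovery subsystem.
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420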
00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:16.642 [2024-12-16 22:45:06.138229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:16.642 [2024-12-16 22:45:06.138280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578355 ] 00:39:16.642 [2024-12-16 22:45:06.211276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.642 [2024-12-16 22:45:06.233531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:16.642 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:16.901 Nvme0n1 00:39:17.161 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:17.161 [ 00:39:17.161 { 00:39:17.161 "name": "Nvme0n1", 00:39:17.161 "aliases": [ 00:39:17.161 "fd4b2f4e-1273-401c-ad06-5a5b8976cce2" 00:39:17.161 ], 00:39:17.161 "product_name": "NVMe disk", 00:39:17.161 "block_size": 4096, 00:39:17.161 "num_blocks": 38912, 00:39:17.161 "uuid": "fd4b2f4e-1273-401c-ad06-5a5b8976cce2", 00:39:17.161 "numa_id": 1, 00:39:17.161 "assigned_rate_limits": { 00:39:17.161 "rw_ios_per_sec": 0, 00:39:17.161 "rw_mbytes_per_sec": 0, 00:39:17.161 "r_mbytes_per_sec": 0, 00:39:17.161 "w_mbytes_per_sec": 0 00:39:17.161 }, 00:39:17.161 "claimed": false, 00:39:17.161 "zoned": false, 00:39:17.161 "supported_io_types": { 00:39:17.161 "read": true, 00:39:17.161 "write": true, 00:39:17.161 "unmap": true, 00:39:17.161 "flush": true, 00:39:17.161 "reset": true, 00:39:17.161 "nvme_admin": true, 00:39:17.161 "nvme_io": true, 00:39:17.161 "nvme_io_md": false, 00:39:17.161 "write_zeroes": true, 00:39:17.161 "zcopy": false, 00:39:17.161 "get_zone_info": false, 00:39:17.161 "zone_management": false, 00:39:17.161 "zone_append": false, 00:39:17.161 "compare": true, 00:39:17.161 "compare_and_write": true, 00:39:17.161 "abort": true, 00:39:17.161 "seek_hole": false, 00:39:17.161 "seek_data": false, 00:39:17.161 "copy": true, 00:39:17.161 "nvme_iov_md": false 00:39:17.161 }, 00:39:17.161 "memory_domains": [ 00:39:17.161 { 00:39:17.161 "dma_device_id": "system", 00:39:17.161 "dma_device_type": 1 00:39:17.161 } 00:39:17.161 ], 00:39:17.161 "driver_specific": { 00:39:17.161 "nvme": [ 00:39:17.161 { 00:39:17.161 "trid": { 00:39:17.161 "trtype": "TCP", 00:39:17.161 "adrfam": "IPv4", 00:39:17.161 "traddr": "10.0.0.2", 00:39:17.161 "trsvcid": "4420", 00:39:17.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:17.161 }, 00:39:17.161 "ctrlr_data": { 
00:39:17.161 "cntlid": 1, 00:39:17.161 "vendor_id": "0x8086", 00:39:17.161 "model_number": "SPDK bdev Controller", 00:39:17.161 "serial_number": "SPDK0", 00:39:17.161 "firmware_revision": "25.01", 00:39:17.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:17.161 "oacs": { 00:39:17.161 "security": 0, 00:39:17.161 "format": 0, 00:39:17.161 "firmware": 0, 00:39:17.161 "ns_manage": 0 00:39:17.161 }, 00:39:17.161 "multi_ctrlr": true, 00:39:17.161 "ana_reporting": false 00:39:17.161 }, 00:39:17.161 "vs": { 00:39:17.161 "nvme_version": "1.3" 00:39:17.161 }, 00:39:17.161 "ns_data": { 00:39:17.161 "id": 1, 00:39:17.161 "can_share": true 00:39:17.161 } 00:39:17.161 } 00:39:17.161 ], 00:39:17.161 "mp_policy": "active_passive" 00:39:17.161 } 00:39:17.161 } 00:39:17.161 ] 00:39:17.161 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578544 00:39:17.161 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:17.161 22:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:17.420 Running I/O for 10 seconds... 00:39:18.355 Latency(us) 00:39:18.355 [2024-12-16T21:45:08.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:18.355 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:39:18.355 [2024-12-16T21:45:08.056Z] =================================================================================================================== 00:39:18.355 [2024-12-16T21:45:08.056Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:39:18.355 00:39:19.291 22:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:19.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:19.291 Nvme0n1 : 2.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:39:19.291 [2024-12-16T21:45:08.993Z] =================================================================================================================== 00:39:19.292 [2024-12-16T21:45:08.993Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:39:19.292 00:39:19.292 true 00:39:19.292 22:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:19.292 22:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:19.550 22:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:19.550 22:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:19.550 22:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578544 00:39:20.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:20.486 Nvme0n1 : 3.00 
23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:39:20.486 [2024-12-16T21:45:10.187Z] =================================================================================================================== 00:39:20.486 [2024-12-16T21:45:10.187Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:39:20.486 00:39:21.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:21.422 Nvme0n1 : 4.00 23154.25 90.45 0.00 0.00 0.00 0.00 0.00 00:39:21.422 [2024-12-16T21:45:11.123Z] =================================================================================================================== 00:39:21.422 [2024-12-16T21:45:11.123Z] Total : 23154.25 90.45 0.00 0.00 0.00 0.00 0.00 00:39:21.422 00:39:22.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:22.357 Nvme0n1 : 5.00 23298.60 91.01 0.00 0.00 0.00 0.00 0.00 00:39:22.357 [2024-12-16T21:45:12.058Z] =================================================================================================================== 00:39:22.357 [2024-12-16T21:45:12.058Z] Total : 23298.60 91.01 0.00 0.00 0.00 0.00 0.00 00:39:22.357 00:39:23.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.292 Nvme0n1 : 6.00 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:39:23.292 [2024-12-16T21:45:12.993Z] =================================================================================================================== 00:39:23.292 [2024-12-16T21:45:12.993Z] Total : 23394.83 91.39 0.00 0.00 0.00 0.00 0.00 00:39:23.292 00:39:24.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.228 Nvme0n1 : 7.00 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:39:24.228 [2024-12-16T21:45:13.929Z] =================================================================================================================== 00:39:24.228 [2024-12-16T21:45:13.929Z] Total : 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:39:24.228 00:39:25.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:25.603 Nvme0n1 : 8.00 23489.50 91.76 0.00 0.00 0.00 0.00 0.00 00:39:25.603 [2024-12-16T21:45:15.304Z] =================================================================================================================== 00:39:25.603 [2024-12-16T21:45:15.304Z] Total : 23489.50 91.76 0.00 0.00 0.00 0.00 0.00 00:39:25.603 00:39:26.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:26.538 Nvme0n1 : 9.00 23532.44 91.92 0.00 0.00 0.00 0.00 0.00 00:39:26.538 [2024-12-16T21:45:16.239Z] =================================================================================================================== 00:39:26.538 [2024-12-16T21:45:16.239Z] Total : 23532.44 91.92 0.00 0.00 0.00 0.00 0.00 00:39:26.538 00:39:27.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:27.474 Nvme0n1 : 10.00 23566.80 92.06 0.00 0.00 0.00 0.00 0.00 00:39:27.474 [2024-12-16T21:45:17.175Z] =================================================================================================================== 00:39:27.474 [2024-12-16T21:45:17.175Z] Total : 23566.80 92.06 0.00 0.00 0.00 0.00 0.00 00:39:27.474 00:39:27.474 00:39:27.474 Latency(us) 00:39:27.474 [2024-12-16T21:45:17.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:27.474 Nvme0n1 : 10.00 23570.68 92.07 0.00 0.00 5427.44 2371.78 26214.40 00:39:27.474 
[2024-12-16T21:45:17.175Z] =================================================================================================================== 00:39:27.474 [2024-12-16T21:45:17.175Z] Total : 23570.68 92.07 0.00 0.00 5427.44 2371.78 26214.40 00:39:27.474 { 00:39:27.474 "results": [ 00:39:27.474 { 00:39:27.474 "job": "Nvme0n1", 00:39:27.474 "core_mask": "0x2", 00:39:27.474 "workload": "randwrite", 00:39:27.474 "status": "finished", 00:39:27.474 "queue_depth": 128, 00:39:27.474 "io_size": 4096, 00:39:27.474 "runtime": 10.003784, 00:39:27.474 "iops": 23570.68085436471, 00:39:27.474 "mibps": 92.07297208736215, 00:39:27.474 "io_failed": 0, 00:39:27.474 "io_timeout": 0, 00:39:27.474 "avg_latency_us": 5427.436923183801, 00:39:27.474 "min_latency_us": 2371.7790476190476, 00:39:27.474 "max_latency_us": 26214.4 00:39:27.474 } 00:39:27.474 ], 00:39:27.474 "core_count": 1 00:39:27.474 } 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 578355 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 578355 ']' 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 578355 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578355 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578355' 00:39:27.474 killing process with pid 578355 00:39:27.474 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 578355 00:39:27.474 Received shutdown signal, test time was about 10.000000 seconds 00:39:27.474 00:39:27.474 Latency(us) 00:39:27.474 [2024-12-16T21:45:17.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.474 [2024-12-16T21:45:17.175Z] =================================================================================================================== 00:39:27.475 [2024-12-16T21:45:17.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:27.475 22:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 578355 00:39:27.475 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:27.733 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
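The dirty run's summary matches the clean run: 23570.68 IOPS at an average of 5427.44 us, which is what Little's law predicts for a full queue of 128 (128 / 23570 IOPS is about 5.4 ms), so the mid-run grow again cost nothing measurable. Teardown then runs initiator-first: stop bdevperf, remove the discovery listener, delete the subsystem, and only then inspect the lvstore (the next lines show free_clusters coming back as 61, i.e. 99 total minus the lvol's 38 allocated). In order, as a sketch; killprocess is the harness helper seen above wrapping uname/ps/kill, and $BDEVPERF_PID/$LVS are shorthand:

  killprocess "$BDEVPERF_PID"    # stop the initiator before touching the target
  "$RPC" nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  "$RPC" bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'   # expect 61 = 99 - 38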
00:39:27.992 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:27.992 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575156 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575156 00:39:28.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575156 Killed "${NVMF_APP[@]}" "$@" 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=580464 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 580464 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 580464 ']' 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:28.252 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:28.252 [2024-12-16 22:45:17.795500] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:28.252 [2024-12-16 22:45:17.796402] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:28.252 [2024-12-16 22:45:17.796437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.252 [2024-12-16 22:45:17.872507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.252 [2024-12-16 22:45:17.893513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:28.252 [2024-12-16 22:45:17.893549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:28.252 [2024-12-16 22:45:17.893556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:28.252 [2024-12-16 22:45:17.893561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:28.252 [2024-12-16 22:45:17.893566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:28.252 [2024-12-16 22:45:17.894079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.511 [2024-12-16 22:45:17.956105] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:28.511 [2024-12-16 22:45:17.956314] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:28.511 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.511 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:28.511 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:28.511 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:28.511 22:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:28.511 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:28.511 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:28.511 [2024-12-16 22:45:18.191443] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:28.511 [2024-12-16 22:45:18.191645] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:28.511 [2024-12-16 22:45:18.191728] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:28.770 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd4b2f4e-1273-401c-ad06-5a5b8976cce2 -t 2000 00:39:29.029 [ 00:39:29.029 { 00:39:29.029 "name": "fd4b2f4e-1273-401c-ad06-5a5b8976cce2", 00:39:29.029 "aliases": [ 00:39:29.029 "lvs/lvol" 00:39:29.029 ], 00:39:29.029 "product_name": "Logical Volume", 00:39:29.029 "block_size": 4096, 00:39:29.029 "num_blocks": 38912, 00:39:29.029 "uuid": "fd4b2f4e-1273-401c-ad06-5a5b8976cce2", 00:39:29.029 "assigned_rate_limits": { 00:39:29.029 "rw_ios_per_sec": 0, 00:39:29.029 "rw_mbytes_per_sec": 0, 00:39:29.029 
"r_mbytes_per_sec": 0, 00:39:29.029 "w_mbytes_per_sec": 0 00:39:29.029 }, 00:39:29.029 "claimed": false, 00:39:29.029 "zoned": false, 00:39:29.029 "supported_io_types": { 00:39:29.029 "read": true, 00:39:29.029 "write": true, 00:39:29.029 "unmap": true, 00:39:29.029 "flush": false, 00:39:29.029 "reset": true, 00:39:29.029 "nvme_admin": false, 00:39:29.029 "nvme_io": false, 00:39:29.029 "nvme_io_md": false, 00:39:29.029 "write_zeroes": true, 00:39:29.029 "zcopy": false, 00:39:29.029 "get_zone_info": false, 00:39:29.029 "zone_management": false, 00:39:29.029 "zone_append": false, 00:39:29.029 "compare": false, 00:39:29.029 "compare_and_write": false, 00:39:29.029 "abort": false, 00:39:29.029 "seek_hole": true, 00:39:29.029 "seek_data": true, 00:39:29.029 "copy": false, 00:39:29.029 "nvme_iov_md": false 00:39:29.029 }, 00:39:29.029 "driver_specific": { 00:39:29.029 "lvol": { 00:39:29.029 "lvol_store_uuid": "3c894994-4d75-4bf7-8710-9161a7509cf5", 00:39:29.029 "base_bdev": "aio_bdev", 00:39:29.029 "thin_provision": false, 00:39:29.029 "num_allocated_clusters": 38, 00:39:29.029 "snapshot": false, 00:39:29.029 "clone": false, 00:39:29.029 "esnap_clone": false 00:39:29.029 } 00:39:29.029 } 00:39:29.029 } 00:39:29.029 ] 00:39:29.029 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:29.029 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:29.029 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:29.288 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:29.288 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:29.288 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:29.288 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:29.288 22:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:29.547 [2024-12-16 22:45:19.126530] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:29.547 22:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:29.547 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:29.805 request: 00:39:29.805 { 00:39:29.805 "uuid": "3c894994-4d75-4bf7-8710-9161a7509cf5", 00:39:29.805 "method": "bdev_lvol_get_lvstores", 00:39:29.805 "req_id": 1 00:39:29.805 } 00:39:29.805 Got JSON-RPC error response 00:39:29.805 response: 00:39:29.805 { 00:39:29.805 "code": -19, 00:39:29.805 "message": "No such device" 00:39:29.805 } 00:39:29.805 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:29.805 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:29.805 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:29.805 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:29.805 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:30.063 aio_bdev 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:30.063 22:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:30.063 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fd4b2f4e-1273-401c-ad06-5a5b8976cce2 -t 2000 00:39:30.322 [ 00:39:30.322 { 00:39:30.322 "name": "fd4b2f4e-1273-401c-ad06-5a5b8976cce2", 00:39:30.322 "aliases": [ 00:39:30.322 "lvs/lvol" 00:39:30.322 ], 00:39:30.322 "product_name": "Logical Volume", 00:39:30.322 "block_size": 4096, 00:39:30.322 "num_blocks": 38912, 00:39:30.322 "uuid": "fd4b2f4e-1273-401c-ad06-5a5b8976cce2", 00:39:30.322 "assigned_rate_limits": { 00:39:30.322 "rw_ios_per_sec": 0, 00:39:30.322 "rw_mbytes_per_sec": 0, 00:39:30.322 "r_mbytes_per_sec": 0, 00:39:30.322 "w_mbytes_per_sec": 0 00:39:30.322 }, 00:39:30.322 "claimed": false, 00:39:30.322 "zoned": false, 00:39:30.322 "supported_io_types": { 00:39:30.322 "read": true, 00:39:30.322 "write": true, 00:39:30.322 "unmap": true, 00:39:30.322 "flush": false, 00:39:30.322 "reset": true, 00:39:30.322 "nvme_admin": false, 00:39:30.322 "nvme_io": false, 00:39:30.322 "nvme_io_md": false, 00:39:30.322 "write_zeroes": true, 00:39:30.322 "zcopy": false, 00:39:30.322 "get_zone_info": false, 00:39:30.322 "zone_management": false, 00:39:30.322 "zone_append": false, 00:39:30.322 "compare": false, 00:39:30.322 "compare_and_write": false, 00:39:30.322 "abort": false, 00:39:30.322 "seek_hole": true, 00:39:30.322 "seek_data": true, 00:39:30.322 "copy": false, 00:39:30.322 "nvme_iov_md": false 00:39:30.322 }, 00:39:30.322 "driver_specific": { 00:39:30.322 "lvol": { 00:39:30.322 "lvol_store_uuid": "3c894994-4d75-4bf7-8710-9161a7509cf5", 00:39:30.322 "base_bdev": "aio_bdev", 00:39:30.322 "thin_provision": false, 00:39:30.322 "num_allocated_clusters": 38, 00:39:30.322 "snapshot": false, 00:39:30.322 "clone": false, 00:39:30.322 "esnap_clone": false 00:39:30.322 } 00:39:30.322 } 00:39:30.322 } 00:39:30.322 ] 00:39:30.322 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:30.322 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:30.322 22:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:30.581 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:30.581 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:30.581 22:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:30.840 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:30.840 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fd4b2f4e-1273-401c-ad06-5a5b8976cce2 00:39:30.840 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c894994-4d75-4bf7-8710-9161a7509cf5 00:39:31.099 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:31.358 00:39:31.358 real 0m16.741s 00:39:31.358 user 0m34.173s 00:39:31.358 sys 0m3.789s 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:31.358 ************************************ 00:39:31.358 END TEST lvs_grow_dirty 00:39:31.358 ************************************ 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:31.358 22:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:31.358 nvmf_trace.0 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.358 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.358 rmmod nvme_tcp 00:39:31.358 rmmod nvme_fabrics 00:39:31.358 rmmod nvme_keyring 00:39:31.616 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.616 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 580464 ']' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 580464 ']' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580464' 00:39:31.617 killing process with pid 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 580464 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.617 22:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.152 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:34.152 00:39:34.152 real 0m41.290s 00:39:34.152 user 0m51.523s 00:39:34.152 sys 0m10.161s 00:39:34.152 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:34.153 ************************************ 00:39:34.153 END TEST nvmf_lvs_grow 00:39:34.153 ************************************ 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:34.153 ************************************ 00:39:34.153 START TEST nvmf_bdev_io_wait 00:39:34.153 ************************************ 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:34.153 * Looking for test storage... 
00:39:34.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.153 --rc genhtml_branch_coverage=1 00:39:34.153 --rc genhtml_function_coverage=1 00:39:34.153 --rc genhtml_legend=1 00:39:34.153 --rc geninfo_all_blocks=1 00:39:34.153 --rc geninfo_unexecuted_blocks=1 00:39:34.153 00:39:34.153 ' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.153 --rc genhtml_branch_coverage=1 00:39:34.153 --rc genhtml_function_coverage=1 00:39:34.153 --rc genhtml_legend=1 00:39:34.153 --rc geninfo_all_blocks=1 00:39:34.153 --rc geninfo_unexecuted_blocks=1 00:39:34.153 00:39:34.153 ' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.153 --rc genhtml_branch_coverage=1 00:39:34.153 --rc genhtml_function_coverage=1 00:39:34.153 --rc genhtml_legend=1 00:39:34.153 --rc geninfo_all_blocks=1 00:39:34.153 --rc geninfo_unexecuted_blocks=1 00:39:34.153 00:39:34.153 ' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.153 --rc genhtml_branch_coverage=1 00:39:34.153 --rc genhtml_function_coverage=1 00:39:34.153 --rc genhtml_legend=1 00:39:34.153 --rc geninfo_all_blocks=1 00:39:34.153 --rc 
geninfo_unexecuted_blocks=1 00:39:34.153 00:39:34.153 ' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.153 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:34.154 22:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:40.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:40.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:40.722 Found net devices under 0000:af:00.0: cvl_0_0 00:39:40.722 
22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:40.722 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:40.723 Found net devices under 0000:af:00.1: cvl_0_1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:40.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:39:40.723 00:39:40.723 --- 10.0.0.2 ping statistics --- 00:39:40.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.723 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:40.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:40.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:39:40.723 00:39:40.723 --- 10.0.0.1 ping statistics --- 00:39:40.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.723 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=584438 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 584438 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 584438 ']' 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
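Annotation — the nvmf_tcp_init sequence traced above (nvmf/common.sh lines 250-291) reduces to the shell sketch below. The interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are the values from this particular rig; other hosts will differ.

# Move the target-side port into a private namespace and address both sides.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1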
00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 [2024-12-16 22:45:29.555587] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:40.723 [2024-12-16 22:45:29.556495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:40.723 [2024-12-16 22:45:29.556528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.723 [2024-12-16 22:45:29.633024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:40.723 [2024-12-16 22:45:29.656799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.723 [2024-12-16 22:45:29.656839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.723 [2024-12-16 22:45:29.656845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.723 [2024-12-16 22:45:29.656851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.723 [2024-12-16 22:45:29.656856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:40.723 [2024-12-16 22:45:29.658156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.723 [2024-12-16 22:45:29.658264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.723 [2024-12-16 22:45:29.658300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.723 [2024-12-16 22:45:29.658301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:40.723 [2024-12-16 22:45:29.658703] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
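Annotation — stripped of harness wrappers, the target launch traced above is a single command run inside the namespace. The wait loop below is a hypothetical stand-in for the harness's waitforlisten helper, which does something equivalent; the SPDK build path is rig-specific.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Assumed stand-in for waitforlisten: block until the RPC UNIX socket appears,
# bailing out if the target dies during startup.
while ! [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.1
done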
00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.723 [2024-12-16 22:45:29.803563] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:40.723 [2024-12-16 22:45:29.804148] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:40.723 [2024-12-16 22:45:29.804178] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:40.723 [2024-12-16 22:45:29.804330] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.723 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.724 [2024-12-16 22:45:29.815072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.724 Malloc0 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:40.724 [2024-12-16 22:45:29.887373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=584462 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=584464 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:40.724 { 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme$subsystem", 00:39:40.724 "trtype": "$TEST_TRANSPORT", 00:39:40.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "$NVMF_PORT", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.724 "hdgst": ${hdgst:-false}, 00:39:40.724 "ddgst": ${ddgst:-false} 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 } 00:39:40.724 EOF 00:39:40.724 )") 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=584466 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:40.724 { 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme$subsystem", 00:39:40.724 "trtype": "$TEST_TRANSPORT", 00:39:40.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "$NVMF_PORT", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.724 "hdgst": ${hdgst:-false}, 00:39:40.724 "ddgst": ${ddgst:-false} 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 } 00:39:40.724 EOF 00:39:40.724 )") 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=584469 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:40.724 { 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme$subsystem", 00:39:40.724 "trtype": "$TEST_TRANSPORT", 00:39:40.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "$NVMF_PORT", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.724 "hdgst": ${hdgst:-false}, 00:39:40.724 "ddgst": ${ddgst:-false} 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 } 00:39:40.724 EOF 00:39:40.724 )") 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:40.724 { 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme$subsystem", 00:39:40.724 "trtype": "$TEST_TRANSPORT", 00:39:40.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "$NVMF_PORT", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:40.724 "hdgst": ${hdgst:-false}, 00:39:40.724 "ddgst": ${ddgst:-false} 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 } 00:39:40.724 EOF 00:39:40.724 )") 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 584462 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
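Annotation — the target-side configuration replayed in the preceding entries amounts to the rpc.py sequence below (rpc_cmd in the harness wraps scripts/rpc.py against /var/tmp/spdk.sock). Note bdev_set_options -p 5 -c 1 deliberately shrinks the bdev_io pool and cache so submissions hit -ENOMEM, which is the I/O-wait path this test exists to exercise; it must run before framework_start_init because --wait-for-rpc deferred subsystem initialization.

scripts/rpc.py bdev_set_options -p 5 -c 1      # tiny bdev_io pool: force IO_WAIT retries
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420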
00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme1", 00:39:40.724 "trtype": "tcp", 00:39:40.724 "traddr": "10.0.0.2", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "4420", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.724 "hdgst": false, 00:39:40.724 "ddgst": false 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 }' 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme1", 00:39:40.724 "trtype": "tcp", 00:39:40.724 "traddr": "10.0.0.2", 00:39:40.724 "adrfam": "ipv4", 00:39:40.724 "trsvcid": "4420", 00:39:40.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.724 "hdgst": false, 00:39:40.724 "ddgst": false 00:39:40.724 }, 00:39:40.724 "method": "bdev_nvme_attach_controller" 00:39:40.724 }' 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:40.724 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:40.724 "params": { 00:39:40.724 "name": "Nvme1", 00:39:40.724 "trtype": "tcp", 00:39:40.725 "traddr": "10.0.0.2", 00:39:40.725 "adrfam": "ipv4", 00:39:40.725 "trsvcid": "4420", 00:39:40.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.725 "hdgst": false, 00:39:40.725 "ddgst": false 00:39:40.725 }, 00:39:40.725 "method": "bdev_nvme_attach_controller" 00:39:40.725 }' 00:39:40.725 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:40.725 22:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:40.725 "params": { 00:39:40.725 "name": "Nvme1", 00:39:40.725 "trtype": "tcp", 00:39:40.725 "traddr": "10.0.0.2", 00:39:40.725 "adrfam": "ipv4", 00:39:40.725 "trsvcid": "4420", 00:39:40.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:40.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:40.725 "hdgst": false, 00:39:40.725 "ddgst": false 00:39:40.725 }, 00:39:40.725 "method": "bdev_nvme_attach_controller" 00:39:40.725 }' 00:39:40.725 [2024-12-16 22:45:29.937217] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:40.725 [2024-12-16 22:45:29.937270] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:40.725 [2024-12-16 22:45:29.938707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:40.725 [2024-12-16 22:45:29.938751] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:40.725 [2024-12-16 22:45:29.941263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:40.725 [2024-12-16 22:45:29.941306] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:40.725 [2024-12-16 22:45:29.941877] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:40.725 [2024-12-16 22:45:29.941915] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:40.725 [2024-12-16 22:45:30.129548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.725 [2024-12-16 22:45:30.146981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:40.725 [2024-12-16 22:45:30.229594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.725 [2024-12-16 22:45:30.250503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:40.725 [2024-12-16 22:45:30.287842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.725 [2024-12-16 22:45:30.303881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:40.725 [2024-12-16 22:45:30.342663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.725 [2024-12-16 22:45:30.358781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:40.983 Running I/O for 1 seconds... 00:39:40.983 Running I/O for 1 seconds... 00:39:40.983 Running I/O for 1 seconds... 00:39:40.983 Running I/O for 1 seconds... 
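Annotation — each of the four "Running I/O for 1 seconds..." lines above is one bdevperf instance (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80, instance ids 1-4). A reconstruction of the write case follows: the bdev_nvme_attach_controller parameters are exactly the printf output above, while the surrounding "subsystems" wrapper is assumed from SPDK's standard JSON-config layout. The harness feeds the JSON over /dev/fd/63 via process substitution; /dev/stdin with a heredoc behaves the same for a manual run. The per-workload IOPS/latency tables follow in the next entries.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF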
00:39:41.920 241472.00 IOPS, 943.25 MiB/s 00:39:41.920 Latency(us) 00:39:41.920 [2024-12-16T21:45:31.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.920 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:41.920 Nvme1n1 : 1.00 241094.97 941.78 0.00 0.00 527.80 220.40 1568.18 00:39:41.920 [2024-12-16T21:45:31.621Z] =================================================================================================================== 00:39:41.920 [2024-12-16T21:45:31.621Z] Total : 241094.97 941.78 0.00 0.00 527.80 220.40 1568.18 00:39:41.920 13346.00 IOPS, 52.13 MiB/s 00:39:41.920 Latency(us) 00:39:41.920 [2024-12-16T21:45:31.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.920 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:41.920 Nvme1n1 : 1.01 13411.85 52.39 0.00 0.00 9517.00 1412.14 10985.08 00:39:41.920 [2024-12-16T21:45:31.621Z] =================================================================================================================== 00:39:41.920 [2024-12-16T21:45:31.621Z] Total : 13411.85 52.39 0.00 0.00 9517.00 1412.14 10985.08 00:39:41.920 10269.00 IOPS, 40.11 MiB/s 00:39:41.920 Latency(us) 00:39:41.920 [2024-12-16T21:45:31.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.920 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:41.920 Nvme1n1 : 1.01 10324.02 40.33 0.00 0.00 12352.65 4244.24 14293.09 00:39:41.920 [2024-12-16T21:45:31.621Z] =================================================================================================================== 00:39:41.920 [2024-12-16T21:45:31.621Z] Total : 10324.02 40.33 0.00 0.00 12352.65 4244.24 14293.09 00:39:41.920 10471.00 IOPS, 40.90 MiB/s 00:39:41.920 Latency(us) 00:39:41.920 [2024-12-16T21:45:31.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.920 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:41.920 Nvme1n1 : 1.01 10562.15 41.26 0.00 0.00 12088.92 1583.79 18724.57 00:39:41.920 [2024-12-16T21:45:31.621Z] =================================================================================================================== 00:39:41.920 [2024-12-16T21:45:31.621Z] Total : 10562.15 41.26 0.00 0.00 12088.92 1583.79 18724.57 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 584464 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 584466 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 584469 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:42.189 rmmod nvme_tcp 00:39:42.189 rmmod nvme_fabrics 00:39:42.189 rmmod nvme_keyring 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 584438 ']' 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 584438 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 584438 ']' 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 584438 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584438 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584438' 00:39:42.189 killing process with pid 584438 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 584438 00:39:42.189 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 584438 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:42.447 
22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:42.447 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.448 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.448 22:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.353 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:44.353 00:39:44.353 real 0m10.583s 00:39:44.353 user 0m14.368s 00:39:44.353 sys 0m6.470s 00:39:44.353 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.353 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:44.353 ************************************ 00:39:44.353 END TEST nvmf_bdev_io_wait 00:39:44.353 ************************************ 00:39:44.612 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:44.612 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.613 ************************************ 00:39:44.613 START TEST nvmf_queue_depth 00:39:44.613 ************************************ 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:44.613 * Looking for test storage... 
00:39:44.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:44.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.613 --rc genhtml_branch_coverage=1 00:39:44.613 --rc genhtml_function_coverage=1 00:39:44.613 --rc genhtml_legend=1 00:39:44.613 --rc geninfo_all_blocks=1 00:39:44.613 --rc geninfo_unexecuted_blocks=1 00:39:44.613 00:39:44.613 ' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:44.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.613 --rc genhtml_branch_coverage=1 00:39:44.613 --rc genhtml_function_coverage=1 00:39:44.613 --rc genhtml_legend=1 00:39:44.613 --rc geninfo_all_blocks=1 00:39:44.613 --rc geninfo_unexecuted_blocks=1 00:39:44.613 00:39:44.613 ' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:44.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.613 --rc genhtml_branch_coverage=1 00:39:44.613 --rc genhtml_function_coverage=1 00:39:44.613 --rc genhtml_legend=1 00:39:44.613 --rc geninfo_all_blocks=1 00:39:44.613 --rc geninfo_unexecuted_blocks=1 00:39:44.613 00:39:44.613 ' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:44.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.613 --rc genhtml_branch_coverage=1 00:39:44.613 --rc genhtml_function_coverage=1 00:39:44.613 --rc genhtml_legend=1 00:39:44.613 --rc geninfo_all_blocks=1 00:39:44.613 --rc 
geninfo_unexecuted_blocks=1 00:39:44.613 00:39:44.613 ' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:44.613 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:44.614 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:44.873 22:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:51.444 22:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:51.444 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:51.444 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:51.444 Found net devices under 0000:af:00.0: cvl_0_0 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:51.444 Found net devices under 0000:af:00.1: cvl_0_1 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:51.444 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:51.445 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:51.445 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:51.445 22:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:51.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:51.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:39:51.445 00:39:51.445 --- 10.0.0.2 ping statistics --- 00:39:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.445 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:51.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:51.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:39:51.445 00:39:51.445 --- 10.0.0.1 ping statistics --- 00:39:51.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.445 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=588175 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 588175 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588175 ']' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
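The nvmf_tcp_init traces above reduce to a short piece of namespace plumbing: the first E810 port (cvl_0_0) is moved into a private namespace and given the target address, the second port (cvl_0_1) stays in the host namespace as the initiator, an iptables rule opens the NVMe/TCP port, and connectivity is verified with a ping in each direction. A minimal sketch, assuming the interface names and 10.0.0.0/24 addressing seen in the log:

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host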
00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 [2024-12-16 22:45:40.188694] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:51.445 [2024-12-16 22:45:40.189626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:51.445 [2024-12-16 22:45:40.189657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:51.445 [2024-12-16 22:45:40.271010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.445 [2024-12-16 22:45:40.292182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:51.445 [2024-12-16 22:45:40.292232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:51.445 [2024-12-16 22:45:40.292242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:51.445 [2024-12-16 22:45:40.292248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:51.445 [2024-12-16 22:45:40.292253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:51.445 [2024-12-16 22:45:40.292731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.445 [2024-12-16 22:45:40.354593] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:51.445 [2024-12-16 22:45:40.354814] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
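With the namespace in place, queue_depth.sh starts the target inside it in interrupt mode and provisions a malloc-backed subsystem through the rpc_cmd calls traced below, then runs bdevperf against it at queue depth 1024. A condensed sketch of that sequence, with paths relative to the SPDK checkout; the socket-readiness loop is a crude stand-in for waitforlisten, and rpc.py is the script behind the traced rpc_cmd helper:

    # start the target inside the namespace, interrupt mode, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket

    # provisioning replayed by the rpc_cmd traces below
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # benchmark side: bdevperf at queue depth 1024, driven over its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests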
00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 [2024-12-16 22:45:40.421406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 Malloc0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 [2024-12-16 22:45:40.501541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=588235 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 588235 /var/tmp/bdevperf.sock 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588235 ']' 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:51.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.445 [2024-12-16 22:45:40.553497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:51.445 [2024-12-16 22:45:40.553541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588235 ] 00:39:51.445 [2024-12-16 22:45:40.627642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.445 [2024-12-16 22:45:40.650728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:51.445 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:51.446 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.446 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:51.446 NVMe0n1 00:39:51.446 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.446 22:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:51.446 Running I/O for 10 seconds... 00:39:53.317 11870.00 IOPS, 46.37 MiB/s [2024-12-16T21:45:43.954Z] 12253.00 IOPS, 47.86 MiB/s [2024-12-16T21:45:45.341Z] 12280.67 IOPS, 47.97 MiB/s [2024-12-16T21:45:46.276Z] 12370.50 IOPS, 48.32 MiB/s [2024-12-16T21:45:47.212Z] 12410.40 IOPS, 48.48 MiB/s [2024-12-16T21:45:48.148Z] 12418.17 IOPS, 48.51 MiB/s [2024-12-16T21:45:49.084Z] 12427.57 IOPS, 48.55 MiB/s [2024-12-16T21:45:50.020Z] 12442.00 IOPS, 48.60 MiB/s [2024-12-16T21:45:50.957Z] 12472.56 IOPS, 48.72 MiB/s [2024-12-16T21:45:51.216Z] 12469.00 IOPS, 48.71 MiB/s 00:40:01.515 Latency(us) 00:40:01.515 [2024-12-16T21:45:51.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.515 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:01.515 Verification LBA range: start 0x0 length 0x4000 00:40:01.515 NVMe0n1 : 10.06 12488.24 48.78 0.00 0.00 81726.36 18974.23 52428.80 00:40:01.515 [2024-12-16T21:45:51.216Z] =================================================================================================================== 00:40:01.515 [2024-12-16T21:45:51.216Z] Total : 12488.24 48.78 0.00 0.00 81726.36 18974.23 52428.80 00:40:01.515 { 00:40:01.515 "results": [ 00:40:01.515 { 00:40:01.515 "job": "NVMe0n1", 00:40:01.515 "core_mask": "0x1", 00:40:01.515 "workload": "verify", 00:40:01.515 "status": "finished", 00:40:01.515 "verify_range": { 00:40:01.515 "start": 0, 00:40:01.515 "length": 16384 00:40:01.515 }, 00:40:01.515 "queue_depth": 1024, 00:40:01.515 "io_size": 4096, 00:40:01.515 "runtime": 10.064107, 00:40:01.515 "iops": 12488.241629386492, 00:40:01.515 "mibps": 48.78219386479098, 00:40:01.515 "io_failed": 0, 00:40:01.515 "io_timeout": 0, 00:40:01.515 "avg_latency_us": 81726.35623092565, 00:40:01.515 "min_latency_us": 18974.23238095238, 00:40:01.515 "max_latency_us": 52428.8 00:40:01.515 } 00:40:01.515 ], 
00:40:01.515 "core_count": 1 00:40:01.515 } 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 588235 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588235 ']' 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588235 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588235 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588235' 00:40:01.515 killing process with pid 588235 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588235 00:40:01.515 Received shutdown signal, test time was about 10.000000 seconds 00:40:01.515 00:40:01.515 Latency(us) 00:40:01.515 [2024-12-16T21:45:51.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.515 [2024-12-16T21:45:51.216Z] =================================================================================================================== 00:40:01.515 [2024-12-16T21:45:51.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:01.515 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588235 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:01.774 rmmod nvme_tcp 00:40:01.774 rmmod nvme_fabrics 00:40:01.774 rmmod nvme_keyring 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:01.774 22:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 588175 ']' 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 588175 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588175 ']' 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588175 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588175 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588175' 00:40:01.774 killing process with pid 588175 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588175 00:40:01.774 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588175 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.033 22:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.939 00:40:03.939 real 0m19.472s 00:40:03.939 user 0m22.609s 00:40:03.939 sys 0m6.077s 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:03.939 ************************************ 00:40:03.939 END TEST nvmf_queue_depth 00:40:03.939 ************************************ 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.939 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:04.198 ************************************ 00:40:04.198 START TEST nvmf_target_multipath 00:40:04.198 ************************************ 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:04.198 * Looking for test storage... 00:40:04.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:04.198 22:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:04.198 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:04.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.199 --rc genhtml_branch_coverage=1 00:40:04.199 --rc genhtml_function_coverage=1 00:40:04.199 --rc genhtml_legend=1 00:40:04.199 --rc geninfo_all_blocks=1 00:40:04.199 --rc geninfo_unexecuted_blocks=1 00:40:04.199 00:40:04.199 ' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:04.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.199 --rc genhtml_branch_coverage=1 00:40:04.199 --rc genhtml_function_coverage=1 00:40:04.199 --rc genhtml_legend=1 00:40:04.199 --rc geninfo_all_blocks=1 00:40:04.199 --rc geninfo_unexecuted_blocks=1 00:40:04.199 00:40:04.199 ' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:04.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.199 --rc genhtml_branch_coverage=1 00:40:04.199 --rc genhtml_function_coverage=1 00:40:04.199 --rc genhtml_legend=1 00:40:04.199 --rc geninfo_all_blocks=1 00:40:04.199 --rc 
geninfo_unexecuted_blocks=1 00:40:04.199 00:40:04.199 ' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:04.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.199 --rc genhtml_branch_coverage=1 00:40:04.199 --rc genhtml_function_coverage=1 00:40:04.199 --rc genhtml_legend=1 00:40:04.199 --rc geninfo_all_blocks=1 00:40:04.199 --rc geninfo_unexecuted_blocks=1 00:40:04.199 00:40:04.199 ' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
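The prologue's lt 1.15 2 check decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-', or ':' and comparing fields numerically from the left, as the cmp_versions traces above show. A stand-alone sketch of that idea (simplified; the real scripts/common.sh also validates each field with a decimal helper):

    lt() {   # succeed when version $1 sorts strictly before version $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov predates 2.x"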
00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:04.199 22:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.199 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.200 22:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
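nvmftestinit then repeats device discovery for the multipath run: the traces that follow walk the supported E810 PCI addresses and collect the kernel interfaces that sysfs exposes under each device. The traced loop condenses to roughly:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs entries for this NIC
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip paths, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done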
00:40:10.767 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:10.767 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:10.768 22:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:10.768 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:10.768 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:10.768 22:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:10.768 Found net devices under 0000:af:00.0: cvl_0_0 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:10.768 Found net devices under 0000:af:00.1: cvl_0_1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:10.768 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:10.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:10.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:40:10.769 00:40:10.769 --- 10.0.0.2 ping statistics --- 00:40:10.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.769 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:10.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:10.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:40:10.769 00:40:10.769 --- 10.0.0.1 ping statistics --- 00:40:10.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:10.769 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:10.769 only one NIC for nvmf test 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:10.769 rmmod nvme_tcp 00:40:10.769 rmmod nvme_fabrics 00:40:10.769 rmmod nvme_keyring 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:10.769 22:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:10.769 22:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.147 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:12.148 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:12.148 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:12.148 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:12.148 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:12.407 22:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:12.407 00:40:12.407 real 0m8.234s 00:40:12.407 user 0m1.835s 00:40:12.407 sys 0m4.401s 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:12.407 ************************************ 00:40:12.407 END TEST nvmf_target_multipath 00:40:12.407 ************************************ 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:12.407 ************************************ 00:40:12.407 START TEST nvmf_zcopy 00:40:12.407 ************************************ 00:40:12.407 22:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:12.407 * Looking for test storage... 
00:40:12.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.407 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:12.407 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:12.407 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:12.667 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.668 --rc genhtml_branch_coverage=1 00:40:12.668 --rc genhtml_function_coverage=1 00:40:12.668 --rc genhtml_legend=1 00:40:12.668 --rc geninfo_all_blocks=1 00:40:12.668 --rc geninfo_unexecuted_blocks=1 00:40:12.668 00:40:12.668 ' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.668 --rc genhtml_branch_coverage=1 00:40:12.668 --rc genhtml_function_coverage=1 00:40:12.668 --rc genhtml_legend=1 00:40:12.668 --rc geninfo_all_blocks=1 00:40:12.668 --rc geninfo_unexecuted_blocks=1 00:40:12.668 00:40:12.668 ' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.668 --rc genhtml_branch_coverage=1 00:40:12.668 --rc genhtml_function_coverage=1 00:40:12.668 --rc genhtml_legend=1 00:40:12.668 --rc geninfo_all_blocks=1 00:40:12.668 --rc geninfo_unexecuted_blocks=1 00:40:12.668 00:40:12.668 ' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:12.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.668 --rc genhtml_branch_coverage=1 00:40:12.668 --rc genhtml_function_coverage=1 00:40:12.668 --rc genhtml_legend=1 00:40:12.668 --rc geninfo_all_blocks=1 00:40:12.668 --rc geninfo_unexecuted_blocks=1 00:40:12.668 00:40:12.668 ' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.668 22:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:12.668 22:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.239 22:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:19.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:19.239 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:19.239 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:19.240 Found net devices under 0000:af:00.0: cvl_0_0 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:19.240 Found net devices under 0000:af:00.1: cvl_0_1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.240 22:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.240 22:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:40:19.240 00:40:19.240 --- 10.0.0.2 ping statistics --- 00:40:19.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.240 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:19.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:40:19.240 00:40:19.240 --- 10.0.0.1 ping statistics --- 00:40:19.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.240 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=596744 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 596744 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 596744 ']' 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.240 [2024-12-16 22:46:08.113935] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:19.240 [2024-12-16 22:46:08.114853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:19.240 [2024-12-16 22:46:08.114884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.240 [2024-12-16 22:46:08.191567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.240 [2024-12-16 22:46:08.212564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.240 [2024-12-16 22:46:08.212600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.240 [2024-12-16 22:46:08.212606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.240 [2024-12-16 22:46:08.212612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.240 [2024-12-16 22:46:08.212617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.240 [2024-12-16 22:46:08.213109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.240 [2024-12-16 22:46:08.274773] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:19.240 [2024-12-16 22:46:08.274987] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
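
The trace above rebuilds the same NVMe/TCP fixture the multipath test just tore down: one NIC is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace in interrupt mode. A minimal sketch of that bring-up, assembled only from commands visible in this trace (interface names, addresses, namespace and binary paths are the ones the log shows; run as root):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
ip netns add "$NS"                                    # private namespace for the target side
ip link set cvl_0_0 netns "$NS"                       # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listen port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # root ns -> target: path sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                # target ns -> initiator
modprobe nvme-tcp                                     # kernel NVMe/TCP support
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &          # shm id 0, tracepoints 0xFFFF, core mask 0x2 (core 1)
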
00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:19.240 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 [2024-12-16 22:46:08.337784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 [2024-12-16 22:46:08.366001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:19.241 22:46:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 malloc0 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:19.241 { 00:40:19.241 "params": { 00:40:19.241 "name": "Nvme$subsystem", 00:40:19.241 "trtype": "$TEST_TRANSPORT", 00:40:19.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.241 "adrfam": "ipv4", 00:40:19.241 "trsvcid": "$NVMF_PORT", 00:40:19.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.241 "hdgst": ${hdgst:-false}, 00:40:19.241 "ddgst": ${ddgst:-false} 00:40:19.241 }, 00:40:19.241 "method": "bdev_nvme_attach_controller" 00:40:19.241 } 00:40:19.241 EOF 00:40:19.241 )") 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:19.241 22:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:19.241 "params": { 00:40:19.241 "name": "Nvme1", 00:40:19.241 "trtype": "tcp", 00:40:19.241 "traddr": "10.0.0.2", 00:40:19.241 "adrfam": "ipv4", 00:40:19.241 "trsvcid": "4420", 00:40:19.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.241 "hdgst": false, 00:40:19.241 "ddgst": false 00:40:19.241 }, 00:40:19.241 "method": "bdev_nvme_attach_controller" 00:40:19.241 }' 00:40:19.241 [2024-12-16 22:46:08.462760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:19.241 [2024-12-16 22:46:08.462816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596904 ] 00:40:19.241 [2024-12-16 22:46:08.536268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.241 [2024-12-16 22:46:08.558853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.241 Running I/O for 10 seconds... 00:40:21.114 8562.00 IOPS, 66.89 MiB/s [2024-12-16T21:46:11.753Z] 8587.00 IOPS, 67.09 MiB/s [2024-12-16T21:46:13.133Z] 8614.33 IOPS, 67.30 MiB/s [2024-12-16T21:46:14.071Z] 8633.00 IOPS, 67.45 MiB/s [2024-12-16T21:46:15.008Z] 8653.60 IOPS, 67.61 MiB/s [2024-12-16T21:46:15.945Z] 8657.83 IOPS, 67.64 MiB/s [2024-12-16T21:46:16.881Z] 8664.86 IOPS, 67.69 MiB/s [2024-12-16T21:46:17.818Z] 8662.75 IOPS, 67.68 MiB/s [2024-12-16T21:46:18.756Z] 8666.33 IOPS, 67.71 MiB/s [2024-12-16T21:46:18.756Z] 8668.60 IOPS, 67.72 MiB/s 00:40:29.055 Latency(us) 00:40:29.055 [2024-12-16T21:46:18.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.055 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:29.055 Verification LBA range: start 0x0 length 0x1000 00:40:29.055 Nvme1n1 : 10.01 8672.09 67.75 0.00 0.00 14717.85 1209.30 20846.69 00:40:29.055 [2024-12-16T21:46:18.756Z] =================================================================================================================== 00:40:29.055 [2024-12-16T21:46:18.756Z] Total : 8672.09 67.75 0.00 0.00 14717.85 1209.30 20846.69 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=598469 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:29.314 { 00:40:29.314 "params": { 00:40:29.314 "name": "Nvme$subsystem", 00:40:29.314 "trtype": "$TEST_TRANSPORT", 00:40:29.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:29.314 "adrfam": "ipv4", 00:40:29.314 "trsvcid": "$NVMF_PORT", 00:40:29.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:29.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:29.314 "hdgst": ${hdgst:-false}, 00:40:29.314 "ddgst": ${ddgst:-false} 00:40:29.314 }, 00:40:29.314 "method": "bdev_nvme_attach_controller" 00:40:29.314 } 00:40:29.314 EOF 00:40:29.314 )") 00:40:29.314 [2024-12-16 22:46:18.901458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:40:29.314 [2024-12-16 22:46:18.901490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:29.314 22:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:29.314 "params": { 00:40:29.314 "name": "Nvme1", 00:40:29.314 "trtype": "tcp", 00:40:29.314 "traddr": "10.0.0.2", 00:40:29.314 "adrfam": "ipv4", 00:40:29.314 "trsvcid": "4420", 00:40:29.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:29.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:29.314 "hdgst": false, 00:40:29.315 "ddgst": false 00:40:29.315 }, 00:40:29.315 "method": "bdev_nvme_attach_controller" 00:40:29.315 }' 00:40:29.315 [2024-12-16 22:46:18.913427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.913438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.925422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.925432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.937419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.937433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.941435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:29.315 [2024-12-16 22:46:18.941474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598469 ] 00:40:29.315 [2024-12-16 22:46:18.949430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.949440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.961419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.961428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.973420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.973429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.985422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.985431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:18.997422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:18.997431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:19.009419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.315 [2024-12-16 22:46:19.009427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.315 [2024-12-16 22:46:19.014699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:29.574 [2024-12-16 22:46:19.021432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.021442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.033420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.033432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.037291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.574 [2024-12-16 22:46:19.045422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.045432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.057431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.057450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.069425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.069440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.081422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.574 [2024-12-16 22:46:19.081433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.574 [2024-12-16 22:46:19.093424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
[2024-12-16 22:46:19.105423 - 22:46:19.177443] same NSID error pair repeated 7x
Running I/O for 5 seconds...
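"Running I/O for 5 seconds..." is bdevperf's run banner; the command line itself is not echoed in this log. A representative invocation consistent with what is recorded might be the following (the -q and -w values are placeholders; -t 5 matches the banner, and -o 8192 is inferred from the IOPS-to-MiB/s ratio of the samples below):

    # Representative bdevperf run over the attached Nvme1 controller; the JSON
    # config is the bdev_nvme_attach_controller block printed earlier.
    ./build/examples/bdevperf --json /tmp/bdev.json -q 64 -o 8192 -t 5 -w verify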
[2024-12-16 22:46:19.195107 - 22:46:20.174636] same NSID error pair repeated ~70x (one pair every ~11-15 ms while I/O runs)
16706.00 IOPS, 130.52 MiB/s [2024-12-16T21:46:20.315Z]
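A quick consistency check on this first per-second sample: at 16706 IOPS, 130.52 MiB/s works out to 8192 bytes per I/O, i.e. a classic 8 KiB block size (an inference from the printed numbers, not a value stated anywhere in the log):

    16706 IO/s x 8192 B = 136,855,552 B/s
    136,855,552 B/s / 1,048,576 B/MiB = 130.52 MiB/s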
[2024-12-16 22:46:20.189472 - 22:46:21.186035] same NSID error pair repeated ~72x
16804.50 IOPS, 131.29 MiB/s [2024-12-16T21:46:21.485Z]
[2024-12-16 22:46:21.201999 - 22:46:22.065329] same NSID error pair repeated ~62x
[2024-12-16 22:46:22.079423 - 22:46:22.193727] same NSID error pair repeated 9x
16809.00 IOPS, 131.32 MiB/s [2024-12-16T21:46:22.353Z]
[2024-12-16 22:46:22.209817 - 22:46:22.883018] same NSID error pair repeated ~48x
[2024-12-16 22:46:22.897269]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.897287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.909181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.909207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.922936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.922954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.937285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.937304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.950271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.950289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.965292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.965310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.979122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.431 [2024-12-16 22:46:22.979140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.431 [2024-12-16 22:46:22.993441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:22.993460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.004417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.004435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.018964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.018982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.033512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.033531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.045376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.045394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.059167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.059185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.074061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.074078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.089301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.089320] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.103065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.103083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.117619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.117637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.432 [2024-12-16 22:46:23.128974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.432 [2024-12-16 22:46:23.128992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.142960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.142978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.157706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.157724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.173188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.173212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.186664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.186682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 16827.00 IOPS, 131.46 MiB/s [2024-12-16T21:46:23.392Z] [2024-12-16 22:46:23.201646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.201664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.212679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.212696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.227601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.227619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.242225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.242243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.256992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.257010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.271568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.271586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.285916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.285934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 
22:46:23.301640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.301659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.314902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.314919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.329579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.329597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.343361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.343378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.358038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.358056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.373140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.373158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.691 [2024-12-16 22:46:23.387486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.691 [2024-12-16 22:46:23.387503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.402011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.402028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.416636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.416654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.430066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.430084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.445562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.445580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.458274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.458292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.473734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.473751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.489375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.489393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.502091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.502108] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.517255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.517273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.529581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.529598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.543017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.543035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.557541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.557559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.570122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.570140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.582858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.582876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.597393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.597412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.610286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.610304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.622788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.622806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.637124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.637142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.951 [2024-12-16 22:46:23.651028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.951 [2024-12-16 22:46:23.651051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.665387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.665405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.678369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.678388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.693159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.693177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.707186] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.707211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.722143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.722160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.737337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.737356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.751493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.751512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.765926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.765944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.781585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.781604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.792012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.792031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.806526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.806545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.821210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.821229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.834813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.834832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.849734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.849755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.864833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.864852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.879071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.879090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.210 [2024-12-16 22:46:23.893858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.210 [2024-12-16 22:46:23.893876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.211 [2024-12-16 22:46:23.909316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.211 [2024-12-16 22:46:23.909335] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.921620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.921646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.935398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.935415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.950049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.950066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.965332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.965351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.979235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.979254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:23.994684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:23.994702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.008863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.008882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.023010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.023028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.037925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.037942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.053217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.053237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.066642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.066660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.081571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.081590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.093944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.093962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.107384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.470 [2024-12-16 22:46:24.107403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.470 [2024-12-16 22:46:24.122379] 
00:40:34.730 16848.40 IOPS, 131.63 MiB/s [2024-12-16T21:46:24.431Z]
00:40:34.730 [2024-12-16 22:46:24.204583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:34.730 [2024-12-16 22:46:24.204601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:34.730 
00:40:34.730 Latency(us)
00:40:34.730 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:40:34.730 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:34.730 Nvme1n1                     :       5.01   16849.99     131.64      0.00     0.00    7588.81    2044.10   13232.03
00:40:34.730 ===================================================================================================================
00:40:34.730 Total                       :              16849.99     131.64      0.00     0.00    7588.81    2044.10   13232.03
[... the error pair continues from 22:46:24.213 through 22:46:24.357 ...]
00:40:34.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (598469) - No such process
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 598469
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:34.730 delay0
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
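Context for the wall of errors collapsed above: each "Requested NSID 1 already in use" / "Unable to add namespace" pair is one nvmf_subsystem_add_ns RPC being rejected because NSID 1 on cnode1 is still occupied, and the three RPCs traced just above are what finally changes that. A minimal standalone sketch of the same sequence, assuming a running SPDK target and the stock scripts/rpc.py client (rpc_cmd in the trace is the harness wrapper around it):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Free NSID 1 by removing the namespace that currently holds it.
    $RPC nvmf_subsystem_remove_ns $NQN 1
    # Wrap malloc0 in a delay bdev; the four latencies (avg/p99 read,
    # avg/p99 write) are in microseconds, so 1000000 adds ~1 s per I/O.
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-export the now-slow bdev under the freed NSID; this one succeeds.
    $RPC nvmf_subsystem_add_ns $NQN delay0 -n 1

The ~1 s delay is what lets the abort run that follows reliably catch I/Os while they are still in flight.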
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:34.730 22:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:40:34.989 [2024-12-16 22:46:24.458626] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:40:43.114 Initializing NVMe Controllers
00:40:43.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:43.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:40:43.115 Initialization complete. Launching workers.
00:40:43.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 259, failed: 22494
00:40:43.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22645, failed to submit 108
00:40:43.115 success 22543, unsuccessful 102, failed 0
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:43.115 rmmod nvme_tcp
00:40:43.115 rmmod nvme_fabrics
00:40:43.115 rmmod nvme_keyring
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 596744 ']'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 596744 ']'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
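For reference, the abort tool driven above lives under build/examples in an SPDK build tree and can be pointed at any reachable target by hand; a sketch reusing this run's exact arguments (only the binary path varies with your tree):

    # Core mask 0x1, 5 s runtime, queue depth 64, 50/50 random read/write,
    # log level warning. The -r string is an SPDK transport ID: NVMe/TCP
    # over IPv4 to 10.0.0.2:4420, namespace 1.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Read against the delay bdev set up earlier, the summary above looks as intended: with every I/O slowed by roughly a second, nearly all of the 22645 submitted aborts (22543) catch their target command still in flight.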
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596744'
00:40:43.115 killing process with pid 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 596744
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:40:43.115 22:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:40:44.494 
00:40:44.494 real 0m32.071s
00:40:44.494 user 0m41.302s
00:40:44.494 sys 0m12.943s
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:44.494 ************************************
00:40:44.494 END TEST nvmf_zcopy
00:40:44.494 ************************************
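One teardown detail worth noting from the trace above: iptr does not flush the firewall, it re-applies the saved ruleset minus the harness's own entries. The idiom, exactly as traced (SPDK_NVMF is presumably the tag the harness attaches to its own rules):

    # Round-trip the ruleset, dropping only lines that mention SPDK_NVMF,
    # so unrelated firewall rules survive the test run.
    iptables-save | grep -v SPDK_NVMF | iptables-restore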
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:40:44.494 ************************************
00:40:44.494 START TEST nvmf_nmic
00:40:44.494 ************************************
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:40:44.494 * Looking for test storage...
00:40:44.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:40:44.494 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:40:44.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:44.754 --rc genhtml_branch_coverage=1
00:40:44.754 --rc genhtml_function_coverage=1
00:40:44.754 --rc genhtml_legend=1
00:40:44.754 --rc geninfo_all_blocks=1
00:40:44.754 --rc geninfo_unexecuted_blocks=1
00:40:44.754 
00:40:44.754 '
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:40:44.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:44.754 --rc genhtml_branch_coverage=1
00:40:44.754 --rc genhtml_function_coverage=1
00:40:44.754 --rc genhtml_legend=1
00:40:44.754 --rc geninfo_all_blocks=1
00:40:44.754 --rc geninfo_unexecuted_blocks=1
00:40:44.754 
00:40:44.754 '
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:40:44.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:44.754 --rc genhtml_branch_coverage=1
00:40:44.754 --rc genhtml_function_coverage=1
00:40:44.754 --rc genhtml_legend=1
00:40:44.754 --rc geninfo_all_blocks=1
00:40:44.754 --rc geninfo_unexecuted_blocks=1
00:40:44.754 
00:40:44.754 '
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:40:44.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:44.754 --rc genhtml_branch_coverage=1
00:40:44.754 --rc genhtml_function_coverage=1
00:40:44.754 --rc genhtml_legend=1
00:40:44.754 --rc geninfo_all_blocks=1
00:40:44.754 --rc geninfo_unexecuted_blocks=1
00:40:44.754 
00:40:44.754 '
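The version check traced above is scripts/common.sh proving lcov 1.15 < 2 component by component: both strings are split on ., - and :, then compared left to right. A condensed sketch of the same idea (not the harness's exact function, which also tracks greater/equal results for other operators):

    # Return success if version $1 sorts strictly before version $2.
    version_lt() {
        local -a v1 v2
        local IFS=.-:                       # same separators as the trace
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing parts count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                            # equal is not "less than"
    }
    version_lt 1.15 2 && echo older         # prints "older", matching the run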
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:40:44.754 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:44.755 22:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:44.755 22:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:51.327 22:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:51.327 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:51.327 22:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:51.327 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:51.327 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:51.328 Found net devices under 0000:af:00.0: cvl_0_0 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:51.328 
22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:51.328 Found net devices under 0000:af:00.1: cvl_0_1 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:51.328 22:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:51.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:51.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:40:51.328 00:40:51.328 --- 10.0.0.2 ping statistics --- 00:40:51.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.328 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:51.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:51.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:40:51.328 00:40:51.328 --- 10.0.0.1 ping statistics --- 00:40:51.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.328 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=603926 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 603926 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 603926 ']' 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.328 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.328 [2024-12-16 22:46:40.260402] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:51.328 [2024-12-16 22:46:40.261330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:51.328 [2024-12-16 22:46:40.261364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.328 [2024-12-16 22:46:40.337801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:51.328 [2024-12-16 22:46:40.361212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.328 [2024-12-16 22:46:40.361252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.328 [2024-12-16 22:46:40.361258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.328 [2024-12-16 22:46:40.361264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.329 [2024-12-16 22:46:40.361269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.329 [2024-12-16 22:46:40.362674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.329 [2024-12-16 22:46:40.362786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:51.329 [2024-12-16 22:46:40.362868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.329 [2024-12-16 22:46:40.362869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.329 [2024-12-16 22:46:40.425587] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:51.329 [2024-12-16 22:46:40.426262] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:51.329 [2024-12-16 22:46:40.426614] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
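
The nvmftestinit sequence traced above splits the two E810 ports between network namespaces: the target port (cvl_0_0) moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is checked with one ping in each direction before nvmf_tgt is started inside the namespace in interrupt mode. A condensed sketch, using only commands that appear in the trace (interface names, addresses, and the SPDK build path are specific to this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator
# Launch the target inside the namespace in interrupt mode, as traced above:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
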
00:40:51.329 [2024-12-16 22:46:40.427025] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:51.329 [2024-12-16 22:46:40.427065] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 [2024-12-16 22:46:40.503677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 Malloc0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 [2024-12-16 22:46:40.587865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:51.329 test case1: single bdev can't be used in multiple subsystems 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 [2024-12-16 22:46:40.615356] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:51.329 [2024-12-16 22:46:40.615377] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:51.329 [2024-12-16 22:46:40.615384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:51.329 request: 00:40:51.329 { 00:40:51.329 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:51.329 "namespace": { 00:40:51.329 "bdev_name": "Malloc0", 00:40:51.329 "no_auto_visible": false, 00:40:51.329 "hide_metadata": false 00:40:51.329 }, 00:40:51.329 "method": "nvmf_subsystem_add_ns", 00:40:51.329 "req_id": 1 00:40:51.329 } 00:40:51.329 Got JSON-RPC error response 00:40:51.329 response: 00:40:51.329 { 00:40:51.329 "code": -32602, 00:40:51.329 "message": "Invalid parameters" 00:40:51.329 } 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:51.329 22:46:40 
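
The add_ns failure captured above is the point of test case1: Malloc0 was already claimed exclusive_write by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 is rejected with JSON-RPC error -32602. In these scripts rpc_cmd is effectively a wrapper over scripts/rpc.py, so the sequence can be reproduced roughly as follows (a sketch assuming the default RPC socket /var/tmp/spdk.sock, run against the target started in the namespace above):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Expected to fail: Malloc0 is already claimed by cnode1, so the target
# answers "Invalid parameters" (-32602) and the test treats that as a pass.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
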
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:51.329 Adding namespace failed - expected result. 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:51.329 test case2: host connect to nvmf target in multiple paths 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:51.329 [2024-12-16 22:46:40.627442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:51.329 22:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:51.589 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:51.589 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:51.589 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:51.589 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:51.589 22:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:53.494 22:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:53.494 [global] 00:40:53.494 thread=1 00:40:53.494 invalidate=1 
00:40:53.494 rw=write 00:40:53.494 time_based=1 00:40:53.494 runtime=1 00:40:53.494 ioengine=libaio 00:40:53.494 direct=1 00:40:53.494 bs=4096 00:40:53.494 iodepth=1 00:40:53.494 norandommap=0 00:40:53.494 numjobs=1 00:40:53.494 00:40:53.494 verify_dump=1 00:40:53.494 verify_backlog=512 00:40:53.494 verify_state_save=0 00:40:53.494 do_verify=1 00:40:53.494 verify=crc32c-intel 00:40:53.494 [job0] 00:40:53.494 filename=/dev/nvme0n1 00:40:53.751 Could not set queue depth (nvme0n1) 00:40:54.010 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:54.010 fio-3.35 00:40:54.010 Starting 1 thread 00:40:54.946 00:40:54.947 job0: (groupid=0, jobs=1): err= 0: pid=604553: Mon Dec 16 22:46:44 2024 00:40:54.947 read: IOPS=2497, BW=9990KiB/s (10.2MB/s)(9.98MiB/1023msec) 00:40:54.947 slat (nsec): min=6497, max=33361, avg=7422.13, stdev=1127.53 00:40:54.947 clat (usec): min=176, max=40929, avg=243.03, stdev=1393.42 00:40:54.947 lat (usec): min=184, max=40936, avg=250.45, stdev=1393.62 00:40:54.947 clat percentiles (usec): 00:40:54.947 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 186], 00:40:54.947 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 190], 60.00th=[ 192], 00:40:54.947 | 70.00th=[ 194], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 225], 00:40:54.947 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[40633], 99.95th=[40633], 00:40:54.947 | 99.99th=[41157] 00:40:54.947 write: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1023msec); 0 zone resets 00:40:54.947 slat (nsec): min=8552, max=40327, avg=10321.89, stdev=1106.76 00:40:54.947 clat (usec): min=123, max=375, avg=134.54, stdev= 8.19 00:40:54.947 lat (usec): min=133, max=415, avg=144.86, stdev= 8.63 00:40:54.947 clat percentiles (usec): 00:40:54.947 | 1.00th=[ 128], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 131], 00:40:54.947 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 135], 00:40:54.947 | 70.00th=[ 137], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 143], 00:40:54.947 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 293], 00:40:54.947 | 99.99th=[ 375] 00:40:54.947 bw ( KiB/s): min= 8192, max=12288, per=100.00%, avg=10240.00, stdev=2896.31, samples=2 00:40:54.947 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:40:54.947 lat (usec) : 250=98.71%, 500=1.23% 00:40:54.947 lat (msec) : 50=0.06% 00:40:54.947 cpu : usr=2.64%, sys=4.40%, ctx=5115, majf=0, minf=1 00:40:54.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:54.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:54.947 issued rwts: total=2555,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:54.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:54.947 00:40:54.947 Run status group 0 (all jobs): 00:40:54.947 READ: bw=9990KiB/s (10.2MB/s), 9990KiB/s-9990KiB/s (10.2MB/s-10.2MB/s), io=9.98MiB (10.5MB), run=1023-1023msec 00:40:54.947 WRITE: bw=9.77MiB/s (10.2MB/s), 9.77MiB/s-9.77MiB/s (10.2MB/s-10.2MB/s), io=10.0MiB (10.5MB), run=1023-1023msec 00:40:54.947 00:40:54.947 Disk stats (read/write): 00:40:54.947 nvme0n1: ios=2598/2560, merge=0/0, ticks=509/338, in_queue=847, util=90.98% 00:40:54.947 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:55.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:55.206 22:46:44 
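
For reference, the fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to approximately the job file below, reassembled from the [global]/[job0] dump interleaved in the trace; only the device node (/dev/nvme0n1 on this host) depends on enumeration order:

[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1

Saved under a name of your choosing (say nmic.fio), it runs with plain "fio nmic.fio" and gives a comparable iodepth=1 write-plus-verify run to the one reported above; absolute throughput will of course vary with the hardware.
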
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:55.206 rmmod nvme_tcp 00:40:55.206 rmmod nvme_fabrics 00:40:55.206 rmmod nvme_keyring 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 603926 ']' 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 603926 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 603926 ']' 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 603926 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:55.206 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603926 00:40:55.466 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:55.466 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:55.466 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 603926' 00:40:55.466 killing process with pid 603926 00:40:55.466 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 603926 00:40:55.466 22:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 603926 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.466 22:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:58.001 00:40:58.001 real 0m13.091s 00:40:58.001 user 0m23.701s 00:40:58.001 sys 0m6.154s 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:58.001 ************************************ 00:40:58.001 END TEST nvmf_nmic 00:40:58.001 ************************************ 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:58.001 ************************************ 00:40:58.001 START TEST nvmf_fio_target 00:40:58.001 ************************************ 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:58.001 * Looking for test storage... 
00:40:58.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:58.001 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:58.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.002 --rc genhtml_branch_coverage=1 00:40:58.002 --rc genhtml_function_coverage=1 00:40:58.002 --rc genhtml_legend=1 00:40:58.002 --rc geninfo_all_blocks=1 00:40:58.002 --rc geninfo_unexecuted_blocks=1 00:40:58.002 00:40:58.002 ' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:58.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.002 --rc genhtml_branch_coverage=1 00:40:58.002 --rc genhtml_function_coverage=1 00:40:58.002 --rc genhtml_legend=1 00:40:58.002 --rc geninfo_all_blocks=1 00:40:58.002 --rc geninfo_unexecuted_blocks=1 00:40:58.002 00:40:58.002 ' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:58.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.002 --rc genhtml_branch_coverage=1 00:40:58.002 --rc genhtml_function_coverage=1 00:40:58.002 --rc genhtml_legend=1 00:40:58.002 --rc geninfo_all_blocks=1 00:40:58.002 --rc geninfo_unexecuted_blocks=1 00:40:58.002 00:40:58.002 ' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:58.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:58.002 --rc genhtml_branch_coverage=1 00:40:58.002 --rc genhtml_function_coverage=1 00:40:58.002 --rc genhtml_legend=1 00:40:58.002 --rc geninfo_all_blocks=1 00:40:58.002 --rc geninfo_unexecuted_blocks=1 00:40:58.002 
00:40:58.002 ' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:58.002 22:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:04.572 22:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:04.572 22:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:04.572 22:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:04.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:04.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:04.572 Found net 
devices under 0000:af:00.0: cvl_0_0 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:04.572 Found net devices under 0000:af:00.1: cvl_0_1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.572 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:04.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:04.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:41:04.573 00:41:04.573 --- 10.0.0.2 ping statistics --- 00:41:04.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.573 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:04.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:41:04.573 00:41:04.573 --- 10.0.0.1 ping statistics --- 00:41:04.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.573 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=608241 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 608241 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 608241 ']' 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:04.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
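The wait above follows the usual SPDK pattern: poll the application's RPC Unix socket until it answers, then continue with configuration. A minimal sketch of that loop in the same shell style, assuming the rpc.py path and socket location used throughout this run (the harness's real waitforlisten adds PID liveness checks and stricter timeout handling):

    # Block until an SPDK app responds on its RPC socket, or give up.
    wait_for_spdk() {
        local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        local sock=${1:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            # rpc_get_methods only succeeds once the app is up and listening
            if "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1    # timed out waiting for the target
    }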
00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:04.573 [2024-12-16 22:46:53.348464] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:04.573 [2024-12-16 22:46:53.349379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:04.573 [2024-12-16 22:46:53.349408] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:04.573 [2024-12-16 22:46:53.427559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:04.573 [2024-12-16 22:46:53.450381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:04.573 [2024-12-16 22:46:53.450418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:04.573 [2024-12-16 22:46:53.450425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:04.573 [2024-12-16 22:46:53.450431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:04.573 [2024-12-16 22:46:53.450436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:04.573 [2024-12-16 22:46:53.451716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:04.573 [2024-12-16 22:46:53.451829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.573 [2024-12-16 22:46:53.451936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.573 [2024-12-16 22:46:53.451937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:04.573 [2024-12-16 22:46:53.514594] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:04.573 [2024-12-16 22:46:53.515468] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:04.573 [2024-12-16 22:46:53.515672] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:04.573 [2024-12-16 22:46:53.516081] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:04.573 [2024-12-16 22:46:53.516116] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
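With the target now up in interrupt mode (reactors and nvmf poll groups can sleep on event fds when idle instead of busy-polling), fio.sh drives the remaining setup over RPC. Condensed from the trace that follows, with the rpc.py path shortened and the host NQN/ID flags omitted from the connect call, the essential sequence is:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512                    # 64 MiB malloc bdev, 512 B blocks (repeated for Malloc0..Malloc6)
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise for Malloc1, raid0, concat0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The initiator then polls lsblk until all four namespaces show the SPDKISFASTANDAWESOME serial before handing the /dev/nvme0n1..n4 block devices to fio.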
00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:04.573 [2024-12-16 22:46:53.748646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:04.573 22:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:04.573 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:04.573 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:04.573 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:04.573 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:04.832 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:04.832 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:05.090 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:05.091 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:05.349 22:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:05.349 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:05.349 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:05.615 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:05.615 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:05.876 22:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:05.876 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:06.134 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:06.134 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:06.134 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:06.392 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:06.392 22:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:06.650 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:06.908 [2024-12-16 22:46:56.352599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.908 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:06.908 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:07.167 22:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:07.426 22:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:41:09.959 22:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:09.959 [global] 00:41:09.959 thread=1 00:41:09.959 invalidate=1 00:41:09.959 rw=write 00:41:09.959 time_based=1 00:41:09.959 runtime=1 00:41:09.959 ioengine=libaio 00:41:09.959 direct=1 00:41:09.959 bs=4096 00:41:09.959 iodepth=1 00:41:09.959 norandommap=0 00:41:09.959 numjobs=1 00:41:09.959 00:41:09.959 verify_dump=1 00:41:09.959 verify_backlog=512 00:41:09.959 verify_state_save=0 00:41:09.959 do_verify=1 00:41:09.959 verify=crc32c-intel 00:41:09.959 [job0] 00:41:09.959 filename=/dev/nvme0n1 00:41:09.959 [job1] 00:41:09.959 filename=/dev/nvme0n2 00:41:09.959 [job2] 00:41:09.959 filename=/dev/nvme0n3 00:41:09.959 [job3] 00:41:09.959 filename=/dev/nvme0n4 00:41:09.959 Could not set queue depth (nvme0n1) 00:41:09.959 Could not set queue depth (nvme0n2) 00:41:09.959 Could not set queue depth (nvme0n3) 00:41:09.959 Could not set queue depth (nvme0n4) 00:41:09.959 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.959 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.959 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.959 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.959 fio-3.35 00:41:09.959 Starting 4 threads 00:41:11.337 00:41:11.337 job0: (groupid=0, jobs=1): err= 0: pid=609331: Mon Dec 16 22:47:00 2024 00:41:11.337 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:41:11.337 slat (nsec): min=10025, max=24607, avg=22581.86, stdev=2886.42 00:41:11.337 clat (usec): min=40824, max=41059, avg=40961.84, stdev=62.08 00:41:11.337 lat (usec): min=40847, max=41083, avg=40984.42, stdev=63.00 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:11.337 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:11.337 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:11.337 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:11.337 | 99.99th=[41157] 00:41:11.337 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:41:11.337 slat (nsec): min=10854, max=46660, avg=12236.64, stdev=2067.09 00:41:11.337 clat (usec): min=143, max=355, avg=203.27, stdev=30.42 00:41:11.337 lat (usec): min=160, max=368, avg=215.50, stdev=30.67 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:41:11.337 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:41:11.337 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 241], 95.00th=[ 273], 00:41:11.337 | 
99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 355], 99.95th=[ 355], 00:41:11.337 | 99.99th=[ 355] 00:41:11.337 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:41:11.337 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:11.337 lat (usec) : 250=88.95%, 500=6.93% 00:41:11.337 lat (msec) : 50=4.12% 00:41:11.337 cpu : usr=0.39%, sys=0.99%, ctx=535, majf=0, minf=1 00:41:11.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:11.337 job1: (groupid=0, jobs=1): err= 0: pid=609332: Mon Dec 16 22:47:00 2024 00:41:11.337 read: IOPS=1024, BW=4099KiB/s (4197kB/s)(4140KiB/1010msec) 00:41:11.337 slat (nsec): min=6979, max=39818, avg=8488.73, stdev=1649.64 00:41:11.337 clat (usec): min=196, max=41130, avg=664.21, stdev=4179.90 00:41:11.337 lat (usec): min=208, max=41140, avg=672.70, stdev=4180.29 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:41:11.337 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:41:11.337 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 260], 00:41:11.337 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:11.337 | 99.99th=[41157] 00:41:11.337 write: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec); 0 zone resets 00:41:11.337 slat (nsec): min=9852, max=41437, avg=11779.78, stdev=1893.26 00:41:11.337 clat (usec): min=133, max=3611, avg=187.38, stdev=133.33 00:41:11.337 lat (usec): min=144, max=3624, avg=199.16, stdev=133.75 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:41:11.337 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184], 00:41:11.337 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 245], 00:41:11.337 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 3425], 99.95th=[ 3621], 00:41:11.337 | 99.99th=[ 3621] 00:41:11.337 bw ( KiB/s): min= 2320, max= 9968, per=31.14%, avg=6144.00, stdev=5407.95, samples=2 00:41:11.337 iops : min= 580, max= 2492, avg=1536.00, stdev=1351.99, samples=2 00:41:11.337 lat (usec) : 250=93.23%, 500=6.18% 00:41:11.337 lat (msec) : 4=0.16%, 50=0.43% 00:41:11.337 cpu : usr=2.28%, sys=4.06%, ctx=2571, majf=0, minf=1 00:41:11.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:11.337 job2: (groupid=0, jobs=1): err= 0: pid=609333: Mon Dec 16 22:47:00 2024 00:41:11.337 read: IOPS=86, BW=347KiB/s (355kB/s)(360KiB/1038msec) 00:41:11.337 slat (nsec): min=7613, max=25201, avg=13227.01, stdev=6723.32 00:41:11.337 clat (usec): min=400, max=41091, avg=10331.54, stdev=17523.14 00:41:11.337 lat (usec): min=412, max=41114, avg=10344.76, stdev=17528.47 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 400], 5.00th=[ 408], 10.00th=[ 408], 20.00th=[ 412], 00:41:11.337 | 30.00th=[ 416], 40.00th=[ 416], 
50.00th=[ 424], 60.00th=[ 429], 00:41:11.337 | 70.00th=[ 441], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:11.337 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:11.337 | 99.99th=[41157] 00:41:11.337 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:41:11.337 slat (nsec): min=9628, max=39817, avg=10789.35, stdev=1873.27 00:41:11.337 clat (usec): min=142, max=286, avg=194.60, stdev=18.18 00:41:11.337 lat (usec): min=153, max=297, avg=205.39, stdev=18.31 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 182], 00:41:11.337 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:41:11.337 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 223], 00:41:11.337 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 289], 00:41:11.337 | 99.99th=[ 289] 00:41:11.337 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:41:11.337 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:11.337 lat (usec) : 250=84.39%, 500=11.96% 00:41:11.337 lat (msec) : 50=3.65% 00:41:11.337 cpu : usr=0.10%, sys=0.77%, ctx=604, majf=0, minf=1 00:41:11.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 issued rwts: total=90,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:11.337 job3: (groupid=0, jobs=1): err= 0: pid=609334: Mon Dec 16 22:47:00 2024 00:41:11.337 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec) 00:41:11.337 slat (nsec): min=6460, max=26390, avg=7450.50, stdev=915.08 00:41:11.337 clat (usec): min=194, max=451, avg=236.47, stdev=22.91 00:41:11.337 lat (usec): min=202, max=459, avg=243.92, stdev=23.03 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:41:11.337 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:41:11.337 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 258], 00:41:11.337 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 433], 99.95th=[ 437], 00:41:11.337 | 99.99th=[ 453] 00:41:11.337 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:11.337 slat (nsec): min=9443, max=38991, avg=10674.70, stdev=1625.88 00:41:11.337 clat (usec): min=117, max=336, avg=161.11, stdev=30.04 00:41:11.337 lat (usec): min=133, max=375, avg=171.79, stdev=30.26 00:41:11.337 clat percentiles (usec): 00:41:11.337 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:41:11.337 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 153], 60.00th=[ 159], 00:41:11.337 | 70.00th=[ 165], 80.00th=[ 184], 90.00th=[ 202], 95.00th=[ 241], 00:41:11.337 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 314], 00:41:11.337 | 99.99th=[ 338] 00:41:11.337 bw ( KiB/s): min= 9672, max= 9672, per=49.02%, avg=9672.00, stdev= 0.00, samples=1 00:41:11.337 iops : min= 2418, max= 2418, avg=2418.00, stdev= 0.00, samples=1 00:41:11.337 lat (usec) : 250=90.20%, 500=9.80% 00:41:11.337 cpu : usr=2.20%, sys=4.60%, ctx=4814, majf=0, minf=1 00:41:11.337 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:11.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.337 issued rwts: total=2254,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:11.337 00:41:11.337 Run status group 0 (all jobs): 00:41:11.337 READ: bw=12.8MiB/s (13.4MB/s), 86.8KiB/s-9007KiB/s (88.9kB/s-9223kB/s), io=13.3MiB (13.9MB), run=1001-1038msec 00:41:11.337 WRITE: bw=19.3MiB/s (20.2MB/s), 1973KiB/s-9.99MiB/s (2020kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1038msec 00:41:11.337 00:41:11.337 Disk stats (read/write): 00:41:11.337 nvme0n1: ios=44/512, merge=0/0, ticks=1723/95, in_queue=1818, util=98.20% 00:41:11.337 nvme0n2: ios=1046/1536, merge=0/0, ticks=526/282, in_queue=808, util=87.09% 00:41:11.337 nvme0n3: ios=63/512, merge=0/0, ticks=1732/92, in_queue=1824, util=98.44% 00:41:11.337 nvme0n4: ios=1939/2048, merge=0/0, ticks=457/332, in_queue=789, util=89.72% 00:41:11.337 22:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:11.337 [global] 00:41:11.337 thread=1 00:41:11.337 invalidate=1 00:41:11.337 rw=randwrite 00:41:11.337 time_based=1 00:41:11.337 runtime=1 00:41:11.337 ioengine=libaio 00:41:11.337 direct=1 00:41:11.337 bs=4096 00:41:11.337 iodepth=1 00:41:11.337 norandommap=0 00:41:11.337 numjobs=1 00:41:11.337 00:41:11.337 verify_dump=1 00:41:11.337 verify_backlog=512 00:41:11.337 verify_state_save=0 00:41:11.337 do_verify=1 00:41:11.337 verify=crc32c-intel 00:41:11.338 [job0] 00:41:11.338 filename=/dev/nvme0n1 00:41:11.338 [job1] 00:41:11.338 filename=/dev/nvme0n2 00:41:11.338 [job2] 00:41:11.338 filename=/dev/nvme0n3 00:41:11.338 [job3] 00:41:11.338 filename=/dev/nvme0n4 00:41:11.338 Could not set queue depth (nvme0n1) 00:41:11.338 Could not set queue depth (nvme0n2) 00:41:11.338 Could not set queue depth (nvme0n3) 00:41:11.338 Could not set queue depth (nvme0n4) 00:41:11.338 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.338 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.338 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.338 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.338 fio-3.35 00:41:11.338 Starting 4 threads 00:41:12.715 00:41:12.715 job0: (groupid=0, jobs=1): err= 0: pid=609702: Mon Dec 16 22:47:02 2024 00:41:12.715 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:41:12.715 slat (nsec): min=7126, max=25187, avg=8312.71, stdev=1507.74 00:41:12.715 clat (usec): min=183, max=41992, avg=424.07, stdev=2751.89 00:41:12.715 lat (usec): min=192, max=42000, avg=432.38, stdev=2752.29 00:41:12.715 clat percentiles (usec): 00:41:12.715 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 215], 00:41:12.715 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 251], 00:41:12.715 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 269], 00:41:12.715 | 99.00th=[ 388], 99.50th=[ 523], 99.90th=[41157], 99.95th=[42206], 00:41:12.715 | 99.99th=[42206] 00:41:12.715 write: IOPS=1918, BW=7672KiB/s (7856kB/s)(7680KiB/1001msec); 0 zone resets 00:41:12.715 slat (nsec): min=10255, max=49283, avg=11750.32, stdev=2217.62 00:41:12.715 clat (usec): min=127, max=319, avg=157.80, stdev=39.73 00:41:12.715 lat 
(usec): min=138, max=367, avg=169.55, stdev=40.25 00:41:12.715 clat percentiles (usec): 00:41:12.715 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 135], 00:41:12.715 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:41:12.715 | 70.00th=[ 159], 80.00th=[ 182], 90.00th=[ 208], 95.00th=[ 265], 00:41:12.715 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 322], 00:41:12.715 | 99.99th=[ 322] 00:41:12.715 bw ( KiB/s): min= 4928, max= 4928, per=27.80%, avg=4928.00, stdev= 0.00, samples=1 00:41:12.715 iops : min= 1232, max= 1232, avg=1232.00, stdev= 0.00, samples=1 00:41:12.715 lat (usec) : 250=78.33%, 500=21.41%, 750=0.06% 00:41:12.715 lat (msec) : 50=0.20% 00:41:12.715 cpu : usr=3.20%, sys=5.30%, ctx=3457, majf=0, minf=1 00:41:12.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.715 issued rwts: total=1536,1920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:12.715 job1: (groupid=0, jobs=1): err= 0: pid=609703: Mon Dec 16 22:47:02 2024 00:41:12.715 read: IOPS=677, BW=2712KiB/s (2777kB/s)(2736KiB/1009msec) 00:41:12.715 slat (nsec): min=6658, max=23504, avg=7978.93, stdev=2455.38 00:41:12.715 clat (usec): min=192, max=41476, avg=1210.89, stdev=6144.16 00:41:12.715 lat (usec): min=199, max=41483, avg=1218.87, stdev=6144.99 00:41:12.715 clat percentiles (usec): 00:41:12.715 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 221], 00:41:12.715 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 235], 00:41:12.715 | 70.00th=[ 241], 80.00th=[ 269], 90.00th=[ 457], 95.00th=[ 494], 00:41:12.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:41:12.715 | 99.99th=[41681] 00:41:12.715 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:41:12.715 slat (nsec): min=9503, max=38374, avg=10830.54, stdev=1592.77 00:41:12.715 clat (usec): min=126, max=299, avg=155.71, stdev=18.09 00:41:12.715 lat (usec): min=136, max=338, avg=166.54, stdev=18.68 00:41:12.715 clat percentiles (usec): 00:41:12.715 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:41:12.715 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:41:12.715 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 186], 00:41:12.715 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 239], 99.95th=[ 302], 00:41:12.715 | 99.99th=[ 302] 00:41:12.715 bw ( KiB/s): min= 8192, max= 8192, per=46.22%, avg=8192.00, stdev= 0.00, samples=1 00:41:12.715 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:12.715 lat (usec) : 250=90.40%, 500=8.08%, 750=0.53% 00:41:12.715 lat (msec) : 2=0.06%, 50=0.94% 00:41:12.715 cpu : usr=1.09%, sys=1.49%, ctx=1709, majf=0, minf=1 00:41:12.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.715 issued rwts: total=684,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:12.715 job2: (groupid=0, jobs=1): err= 0: pid=609704: Mon Dec 16 22:47:02 2024 00:41:12.715 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:41:12.715 slat (nsec): min=9537, 
max=24963, avg=20587.73, stdev=4905.52 00:41:12.715 clat (usec): min=40363, max=41230, avg=40947.52, stdev=152.01 00:41:12.715 lat (usec): min=40375, max=41239, avg=40968.11, stdev=152.36 00:41:12.715 clat percentiles (usec): 00:41:12.715 | 1.00th=[40109], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:12.715 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:12.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:12.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:12.715 | 99.99th=[41157] 00:41:12.715 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:41:12.715 slat (nsec): min=11535, max=46477, avg=13794.15, stdev=2419.16 00:41:12.715 clat (usec): min=158, max=268, avg=195.66, stdev=16.70 00:41:12.716 lat (usec): min=171, max=291, avg=209.45, stdev=17.20 00:41:12.716 clat percentiles (usec): 00:41:12.716 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:41:12.716 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:41:12.716 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 227], 00:41:12.716 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 269], 99.95th=[ 269], 00:41:12.716 | 99.99th=[ 269] 00:41:12.716 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=1 00:41:12.716 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:12.716 lat (usec) : 250=95.32%, 500=0.56% 00:41:12.716 lat (msec) : 50=4.12% 00:41:12.716 cpu : usr=0.99%, sys=0.50%, ctx=535, majf=0, minf=1 00:41:12.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.716 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:12.716 job3: (groupid=0, jobs=1): err= 0: pid=609705: Mon Dec 16 22:47:02 2024 00:41:12.716 read: IOPS=518, BW=2076KiB/s (2126kB/s)(2080KiB/1002msec) 00:41:12.716 slat (nsec): min=6675, max=26183, avg=8089.18, stdev=2351.59 00:41:12.716 clat (usec): min=211, max=41099, avg=1546.76, stdev=6891.89 00:41:12.716 lat (usec): min=219, max=41109, avg=1554.85, stdev=6893.36 00:41:12.716 clat percentiles (usec): 00:41:12.716 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 273], 20.00th=[ 277], 00:41:12.716 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:41:12.716 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 437], 95.00th=[ 449], 00:41:12.716 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:12.716 | 99.99th=[41157] 00:41:12.716 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:41:12.716 slat (nsec): min=9357, max=40621, avg=11024.44, stdev=2013.14 00:41:12.716 clat (usec): min=123, max=1469, avg=173.65, stdev=56.83 00:41:12.716 lat (usec): min=133, max=1480, avg=184.68, stdev=57.08 00:41:12.716 clat percentiles (usec): 00:41:12.716 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:41:12.716 | 30.00th=[ 147], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 172], 00:41:12.716 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 221], 95.00th=[ 277], 00:41:12.716 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 1467], 00:41:12.716 | 99.99th=[ 1467] 00:41:12.716 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=2 00:41:12.716 iops : 
min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:41:12.716 lat (usec) : 250=61.53%, 500=37.31%, 750=0.06% 00:41:12.716 lat (msec) : 2=0.06%, 50=1.04% 00:41:12.716 cpu : usr=0.30%, sys=2.00%, ctx=1545, majf=0, minf=1 00:41:12.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:12.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:12.716 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:12.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:12.716 00:41:12.716 Run status group 0 (all jobs): 00:41:12.716 READ: bw=10.7MiB/s (11.2MB/s), 87.0KiB/s-6138KiB/s (89.1kB/s-6285kB/s), io=10.8MiB (11.3MB), run=1001-1011msec 00:41:12.716 WRITE: bw=17.3MiB/s (18.1MB/s), 2026KiB/s-7672KiB/s (2074kB/s-7856kB/s), io=17.5MiB (18.3MB), run=1001-1011msec 00:41:12.716 00:41:12.716 Disk stats (read/write): 00:41:12.716 nvme0n1: ios=1255/1536, merge=0/0, ticks=1479/228, in_queue=1707, util=97.29% 00:41:12.716 nvme0n2: ios=711/1024, merge=0/0, ticks=1532/152, in_queue=1684, util=97.34% 00:41:12.716 nvme0n3: ios=75/512, merge=0/0, ticks=1301/100, in_queue=1401, util=97.48% 00:41:12.716 nvme0n4: ios=540/1024, merge=0/0, ticks=1586/178, in_queue=1764, util=97.46% 00:41:12.716 22:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:12.716 [global] 00:41:12.716 thread=1 00:41:12.716 invalidate=1 00:41:12.716 rw=write 00:41:12.716 time_based=1 00:41:12.716 runtime=1 00:41:12.716 ioengine=libaio 00:41:12.716 direct=1 00:41:12.716 bs=4096 00:41:12.716 iodepth=128 00:41:12.716 norandommap=0 00:41:12.716 numjobs=1 00:41:12.716 00:41:12.716 verify_dump=1 00:41:12.716 verify_backlog=512 00:41:12.716 verify_state_save=0 00:41:12.716 do_verify=1 00:41:12.716 verify=crc32c-intel 00:41:12.716 [job0] 00:41:12.716 filename=/dev/nvme0n1 00:41:12.716 [job1] 00:41:12.716 filename=/dev/nvme0n2 00:41:12.716 [job2] 00:41:12.716 filename=/dev/nvme0n3 00:41:12.716 [job3] 00:41:12.716 filename=/dev/nvme0n4 00:41:12.716 Could not set queue depth (nvme0n1) 00:41:12.716 Could not set queue depth (nvme0n2) 00:41:12.716 Could not set queue depth (nvme0n3) 00:41:12.716 Could not set queue depth (nvme0n4) 00:41:12.974 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.974 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.974 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.974 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.974 fio-3.35 00:41:12.974 Starting 4 threads 00:41:14.384 00:41:14.384 job0: (groupid=0, jobs=1): err= 0: pid=610063: Mon Dec 16 22:47:03 2024 00:41:14.384 read: IOPS=4991, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1003msec) 00:41:14.384 slat (nsec): min=1439, max=11520k, avg=96693.40, stdev=600495.78 00:41:14.384 clat (usec): min=2253, max=35987, avg=13327.25, stdev=5701.43 00:41:14.384 lat (usec): min=2257, max=37231, avg=13423.94, stdev=5728.15 00:41:14.384 clat percentiles (usec): 00:41:14.384 | 1.00th=[ 3720], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9896], 00:41:14.384 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 
60.00th=[11863], 00:41:14.384 | 70.00th=[13173], 80.00th=[16581], 90.00th=[22152], 95.00th=[26870], 00:41:14.384 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:41:14.384 | 99.99th=[35914] 00:41:14.384 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:41:14.384 slat (usec): min=2, max=18327, avg=87.84, stdev=573.73 00:41:14.384 clat (usec): min=401, max=64800, avg=11785.24, stdev=6138.80 00:41:14.384 lat (usec): min=409, max=64809, avg=11873.08, stdev=6184.00 00:41:14.384 clat percentiles (usec): 00:41:14.384 | 1.00th=[ 627], 5.00th=[ 5604], 10.00th=[ 8717], 20.00th=[10028], 00:41:14.384 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[11076], 00:41:14.384 | 70.00th=[11469], 80.00th=[11863], 90.00th=[15401], 95.00th=[20317], 00:41:14.384 | 99.00th=[42206], 99.50th=[45351], 99.90th=[54264], 99.95th=[54264], 00:41:14.384 | 99.99th=[64750] 00:41:14.384 bw ( KiB/s): min=20480, max=20480, per=30.77%, avg=20480.00, stdev= 0.00, samples=2 00:41:14.384 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:14.384 lat (usec) : 500=0.10%, 750=1.21%, 1000=0.07% 00:41:14.384 lat (msec) : 2=0.36%, 4=0.97%, 10=17.03%, 20=71.36%, 50=8.85% 00:41:14.384 lat (msec) : 100=0.06% 00:41:14.384 cpu : usr=3.59%, sys=4.89%, ctx=586, majf=0, minf=1 00:41:14.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:14.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:14.384 issued rwts: total=5006,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:14.384 job1: (groupid=0, jobs=1): err= 0: pid=610065: Mon Dec 16 22:47:03 2024 00:41:14.384 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.88MiB/1005msec) 00:41:14.384 slat (nsec): min=1064, max=24953k, avg=247388.17, stdev=1520089.59 00:41:14.384 clat (usec): min=298, max=70477, avg=31604.02, stdev=12803.82 00:41:14.384 lat (usec): min=6245, max=70482, avg=31851.41, stdev=12815.47 00:41:14.384 clat percentiles (usec): 00:41:14.384 | 1.00th=[ 6456], 5.00th=[11076], 10.00th=[13304], 20.00th=[19268], 00:41:14.384 | 30.00th=[25560], 40.00th=[30016], 50.00th=[32113], 60.00th=[33817], 00:41:14.384 | 70.00th=[37487], 80.00th=[40633], 90.00th=[44303], 95.00th=[55313], 00:41:14.384 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:41:14.384 | 99.99th=[70779] 00:41:14.384 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:41:14.384 slat (nsec): min=1796, max=9378.9k, avg=142663.43, stdev=736548.40 00:41:14.384 clat (usec): min=5191, max=39700, avg=18455.62, stdev=8787.74 00:41:14.384 lat (usec): min=5199, max=39711, avg=18598.28, stdev=8818.64 00:41:14.384 clat percentiles (usec): 00:41:14.384 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[ 8586], 00:41:14.384 | 30.00th=[11731], 40.00th=[14746], 50.00th=[19530], 60.00th=[21103], 00:41:14.384 | 70.00th=[22938], 80.00th=[26346], 90.00th=[30278], 95.00th=[33162], 00:41:14.384 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:41:14.384 | 99.99th=[39584] 00:41:14.384 bw ( KiB/s): min= 8192, max=12288, per=15.38%, avg=10240.00, stdev=2896.31, samples=2 00:41:14.384 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:41:14.384 lat (usec) : 500=0.02% 00:41:14.384 lat (msec) : 10=15.19%, 20=23.54%, 50=57.35%, 100=3.91% 00:41:14.384 cpu : usr=1.89%, 
sys=2.09%, ctx=333, majf=0, minf=1 00:41:14.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:14.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:14.385 issued rwts: total=2530,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:14.385 job2: (groupid=0, jobs=1): err= 0: pid=610066: Mon Dec 16 22:47:03 2024 00:41:14.385 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:41:14.385 slat (nsec): min=1315, max=9346.0k, avg=104358.34, stdev=716699.35 00:41:14.385 clat (usec): min=3890, max=30939, avg=13554.67, stdev=3938.12 00:41:14.385 lat (usec): min=3907, max=30948, avg=13659.03, stdev=3986.89 00:41:14.385 clat percentiles (usec): 00:41:14.385 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[10159], 00:41:14.385 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13304], 60.00th=[14615], 00:41:14.385 | 70.00th=[16319], 80.00th=[17171], 90.00th=[18220], 95.00th=[19530], 00:41:14.385 | 99.00th=[21365], 99.50th=[25560], 99.90th=[30802], 99.95th=[31065], 00:41:14.385 | 99.99th=[31065] 00:41:14.385 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1005msec); 0 zone resets 00:41:14.385 slat (usec): min=2, max=40996, avg=147.45, stdev=1595.29 00:41:14.385 clat (usec): min=699, max=210294, avg=15843.65, stdev=16254.91 00:41:14.385 lat (usec): min=723, max=210320, avg=15991.10, stdev=16560.08 00:41:14.385 clat percentiles (msec): 00:41:14.385 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 11], 00:41:14.385 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:41:14.385 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 24], 95.00th=[ 39], 00:41:14.385 | 99.00th=[ 83], 99.50th=[ 124], 99.90th=[ 205], 99.95th=[ 211], 00:41:14.385 | 99.99th=[ 211] 00:41:14.385 bw ( KiB/s): min=12288, max=18088, per=22.82%, avg=15188.00, stdev=4101.22, samples=2 00:41:14.385 iops : min= 3072, max= 4522, avg=3797.00, stdev=1025.30, samples=2 00:41:14.385 lat (usec) : 750=0.04%, 1000=0.09% 00:41:14.385 lat (msec) : 4=0.43%, 10=18.23%, 20=73.06%, 50=7.30%, 100=0.43% 00:41:14.385 lat (msec) : 250=0.43% 00:41:14.385 cpu : usr=2.39%, sys=3.78%, ctx=336, majf=0, minf=1 00:41:14.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:14.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:14.385 issued rwts: total=3584,3924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:14.385 job3: (groupid=0, jobs=1): err= 0: pid=610067: Mon Dec 16 22:47:03 2024 00:41:14.385 read: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1003msec) 00:41:14.385 slat (nsec): min=1310, max=19171k, avg=104577.15, stdev=793795.59 00:41:14.385 clat (usec): min=1889, max=36900, avg=13394.66, stdev=4605.36 00:41:14.385 lat (usec): min=3148, max=36924, avg=13499.24, stdev=4648.61 00:41:14.385 clat percentiles (usec): 00:41:14.385 | 1.00th=[ 5211], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[10159], 00:41:14.385 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13304], 00:41:14.385 | 70.00th=[15008], 80.00th=[15926], 90.00th=[18744], 95.00th=[22414], 00:41:14.385 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:41:14.385 | 99.99th=[36963] 00:41:14.385 write: IOPS=5104, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:41:14.385 slat (nsec): min=1937, max=12447k, avg=86960.92, stdev=632144.26 00:41:14.385 clat (usec): min=638, max=34853, avg=12434.38, stdev=5134.81 00:41:14.385 lat (usec): min=647, max=34857, avg=12521.34, stdev=5183.62 00:41:14.385 clat percentiles (usec): 00:41:14.385 | 1.00th=[ 2900], 5.00th=[ 4883], 10.00th=[ 6259], 20.00th=[ 8586], 00:41:14.385 | 30.00th=[ 9896], 40.00th=[11469], 50.00th=[12387], 60.00th=[12649], 00:41:14.385 | 70.00th=[13566], 80.00th=[15139], 90.00th=[19792], 95.00th=[21365], 00:41:14.385 | 99.00th=[32113], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:41:14.385 | 99.99th=[34866] 00:41:14.385 bw ( KiB/s): min=20480, max=20480, per=30.77%, avg=20480.00, stdev= 0.00, samples=2 00:41:14.385 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:14.385 lat (usec) : 750=0.07%, 1000=0.04% 00:41:14.385 lat (msec) : 2=0.28%, 4=1.83%, 10=22.91%, 20=66.43%, 50=8.42% 00:41:14.385 cpu : usr=3.49%, sys=5.89%, ctx=277, majf=0, minf=1 00:41:14.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:14.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:14.385 issued rwts: total=4747,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:14.385 00:41:14.385 Run status group 0 (all jobs): 00:41:14.385 READ: bw=61.7MiB/s (64.7MB/s), 9.83MiB/s-19.5MiB/s (10.3MB/s-20.4MB/s), io=62.0MiB (65.0MB), run=1003-1005msec 00:41:14.385 WRITE: bw=65.0MiB/s (68.2MB/s), 9.95MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=65.3MiB (68.5MB), run=1003-1005msec 00:41:14.385 00:41:14.385 Disk stats (read/write): 00:41:14.385 nvme0n1: ios=4148/4210, merge=0/0, ticks=29646/26157, in_queue=55803, util=98.30% 00:41:14.385 nvme0n2: ios=2075/2497, merge=0/0, ticks=16001/11575, in_queue=27576, util=87.51% 00:41:14.385 nvme0n3: ios=3108/3177, merge=0/0, ticks=22647/20877, in_queue=43524, util=96.36% 00:41:14.385 nvme0n4: ios=4121/4275, merge=0/0, ticks=35059/34429, in_queue=69488, util=90.67% 00:41:14.385 22:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:14.385 [global] 00:41:14.385 thread=1 00:41:14.385 invalidate=1 00:41:14.385 rw=randwrite 00:41:14.385 time_based=1 00:41:14.385 runtime=1 00:41:14.385 ioengine=libaio 00:41:14.385 direct=1 00:41:14.385 bs=4096 00:41:14.385 iodepth=128 00:41:14.385 norandommap=0 00:41:14.385 numjobs=1 00:41:14.385 00:41:14.385 verify_dump=1 00:41:14.385 verify_backlog=512 00:41:14.385 verify_state_save=0 00:41:14.385 do_verify=1 00:41:14.385 verify=crc32c-intel 00:41:14.385 [job0] 00:41:14.385 filename=/dev/nvme0n1 00:41:14.385 [job1] 00:41:14.385 filename=/dev/nvme0n2 00:41:14.385 [job2] 00:41:14.385 filename=/dev/nvme0n3 00:41:14.385 [job3] 00:41:14.385 filename=/dev/nvme0n4 00:41:14.385 Could not set queue depth (nvme0n1) 00:41:14.385 Could not set queue depth (nvme0n2) 00:41:14.385 Could not set queue depth (nvme0n3) 00:41:14.385 Could not set queue depth (nvme0n4) 00:41:14.643 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:14.643 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:14.643 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:14.643 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:14.643 fio-3.35 00:41:14.643 Starting 4 threads 00:41:16.012 00:41:16.012 job0: (groupid=0, jobs=1): err= 0: pid=610434: Mon Dec 16 22:47:05 2024 00:41:16.012 read: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec) 00:41:16.012 slat (nsec): min=1161, max=12725k, avg=89443.78, stdev=767449.00 00:41:16.012 clat (usec): min=1312, max=30100, avg=12887.82, stdev=3874.89 00:41:16.012 lat (usec): min=1335, max=30108, avg=12977.26, stdev=3921.20 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 3490], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:41:16.012 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[12911], 00:41:16.012 | 70.00th=[13566], 80.00th=[15139], 90.00th=[19268], 95.00th=[20317], 00:41:16.012 | 99.00th=[25822], 99.50th=[25822], 99.90th=[29754], 99.95th=[29754], 00:41:16.012 | 99.99th=[30016] 00:41:16.012 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:41:16.012 slat (usec): min=2, max=12569, avg=92.43, stdev=704.67 00:41:16.012 clat (usec): min=524, max=31592, avg=12056.30, stdev=4411.26 00:41:16.012 lat (usec): min=532, max=31620, avg=12148.73, stdev=4469.07 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 2868], 5.00th=[ 5932], 10.00th=[ 7308], 20.00th=[ 8979], 00:41:16.012 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:41:16.012 | 70.00th=[12518], 80.00th=[14091], 90.00th=[19530], 95.00th=[21365], 00:41:16.012 | 99.00th=[24773], 99.50th=[25297], 99.90th=[27395], 99.95th=[30016], 00:41:16.012 | 99.99th=[31589] 00:41:16.012 bw ( KiB/s): min=20480, max=20480, per=27.90%, avg=20480.00, stdev= 0.00, samples=2 00:41:16.012 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:16.012 lat (usec) : 750=0.03% 00:41:16.012 lat (msec) : 2=0.20%, 4=1.76%, 10=18.99%, 20=71.45%, 50=7.58% 00:41:16.012 cpu : usr=4.08%, sys=5.07%, ctx=352, majf=0, minf=1 00:41:16.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:16.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:16.012 issued rwts: total=5097,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:16.012 job1: (groupid=0, jobs=1): err= 0: pid=610435: Mon Dec 16 22:47:05 2024 00:41:16.012 read: IOPS=4351, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1006msec) 00:41:16.012 slat (nsec): min=1044, max=22338k, avg=111360.96, stdev=845526.27 00:41:16.012 clat (usec): min=1192, max=65010, avg=14271.56, stdev=7939.62 00:41:16.012 lat (usec): min=3705, max=65067, avg=14382.92, stdev=8000.44 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 5866], 5.00th=[ 7373], 10.00th=[ 8586], 20.00th=[ 9634], 00:41:16.012 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12518], 00:41:16.012 | 70.00th=[13304], 80.00th=[15401], 90.00th=[22414], 95.00th=[29492], 00:41:16.012 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:41:16.012 | 99.99th=[64750] 00:41:16.012 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:41:16.012 slat (nsec): min=1865, max=15282k, avg=106613.75, stdev=717617.09 00:41:16.012 clat (usec): min=3843, max=73792, avg=13938.97, stdev=9382.65 00:41:16.012 
lat (usec): min=3851, max=73798, avg=14045.58, stdev=9436.26 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 4752], 5.00th=[ 6587], 10.00th=[ 7898], 20.00th=[ 9503], 00:41:16.012 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11863], 60.00th=[11994], 00:41:16.012 | 70.00th=[12256], 80.00th=[13698], 90.00th=[25560], 95.00th=[31851], 00:41:16.012 | 99.00th=[58459], 99.50th=[68682], 99.90th=[73925], 99.95th=[73925], 00:41:16.012 | 99.99th=[73925] 00:41:16.012 bw ( KiB/s): min=17880, max=18984, per=25.11%, avg=18432.00, stdev=780.65, samples=2 00:41:16.012 iops : min= 4470, max= 4746, avg=4608.00, stdev=195.16, samples=2 00:41:16.012 lat (msec) : 2=0.01%, 4=0.41%, 10=24.14%, 20=61.46%, 50=12.47% 00:41:16.012 lat (msec) : 100=1.50% 00:41:16.012 cpu : usr=3.38%, sys=4.08%, ctx=466, majf=0, minf=1 00:41:16.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:16.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:16.012 issued rwts: total=4378,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:16.012 job2: (groupid=0, jobs=1): err= 0: pid=610436: Mon Dec 16 22:47:05 2024 00:41:16.012 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:41:16.012 slat (nsec): min=1092, max=30108k, avg=133526.95, stdev=937034.26 00:41:16.012 clat (usec): min=3111, max=57564, avg=16206.71, stdev=7300.47 00:41:16.012 lat (usec): min=4003, max=58589, avg=16340.24, stdev=7354.48 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[11731], 20.00th=[12518], 00:41:16.012 | 30.00th=[12911], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:41:16.012 | 70.00th=[15795], 80.00th=[16909], 90.00th=[23462], 95.00th=[26084], 00:41:16.012 | 99.00th=[49546], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:41:16.012 | 99.99th=[57410] 00:41:16.012 write: IOPS=3607, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:41:16.012 slat (nsec): min=1842, max=20028k, avg=138369.12, stdev=909854.09 00:41:16.012 clat (usec): min=321, max=65950, avg=18492.91, stdev=10310.09 00:41:16.012 lat (usec): min=2480, max=65962, avg=18631.28, stdev=10367.18 00:41:16.012 clat percentiles (usec): 00:41:16.012 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[11994], 20.00th=[13042], 00:41:16.012 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:41:16.012 | 70.00th=[17957], 80.00th=[20841], 90.00th=[28443], 95.00th=[41157], 00:41:16.012 | 99.00th=[61080], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:41:16.012 | 99.99th=[65799] 00:41:16.012 bw ( KiB/s): min=12288, max=12288, per=16.74%, avg=12288.00, stdev= 0.00, samples=1 00:41:16.012 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:41:16.012 lat (usec) : 500=0.01% 00:41:16.012 lat (msec) : 4=0.38%, 10=2.11%, 20=77.68%, 50=17.51%, 100=2.31% 00:41:16.012 cpu : usr=2.40%, sys=4.30%, ctx=332, majf=0, minf=1 00:41:16.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:16.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:16.012 issued rwts: total=3584,3611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:16.012 job3: (groupid=0, jobs=1): err= 0: pid=610437: Mon Dec 16 
22:47:05 2024 00:41:16.013 read: IOPS=4731, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1003msec) 00:41:16.013 slat (nsec): min=1347, max=7278.5k, avg=98226.56, stdev=542885.89 00:41:16.013 clat (usec): min=2218, max=29209, avg=12274.36, stdev=2958.81 00:41:16.013 lat (usec): min=2436, max=29663, avg=12372.59, stdev=2985.84 00:41:16.013 clat percentiles (usec): 00:41:16.013 | 1.00th=[ 4752], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[10028], 00:41:16.013 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:41:16.013 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15401], 95.00th=[15926], 00:41:16.013 | 99.00th=[20317], 99.50th=[24511], 99.90th=[29230], 99.95th=[29230], 00:41:16.013 | 99.99th=[29230] 00:41:16.013 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:41:16.013 slat (nsec): min=1953, max=9418.7k, avg=96958.49, stdev=481470.37 00:41:16.013 clat (usec): min=227, max=60994, avg=13465.75, stdev=6341.63 00:41:16.013 lat (usec): min=473, max=60997, avg=13562.71, stdev=6371.10 00:41:16.013 clat percentiles (usec): 00:41:16.013 | 1.00th=[ 2900], 5.00th=[ 6521], 10.00th=[ 9110], 20.00th=[11076], 00:41:16.013 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13042], 60.00th=[13435], 00:41:16.013 | 70.00th=[13698], 80.00th=[13960], 90.00th=[15795], 95.00th=[25822], 00:41:16.013 | 99.00th=[44827], 99.50th=[57410], 99.90th=[60031], 99.95th=[61080], 00:41:16.013 | 99.99th=[61080] 00:41:16.013 bw ( KiB/s): min=20480, max=20480, per=27.90%, avg=20480.00, stdev= 0.00, samples=2 00:41:16.013 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:16.013 lat (usec) : 250=0.01%, 500=0.03%, 750=0.08%, 1000=0.01% 00:41:16.013 lat (msec) : 2=0.19%, 4=1.51%, 10=14.79%, 20=79.71%, 50=3.20% 00:41:16.013 lat (msec) : 100=0.47% 00:41:16.013 cpu : usr=3.39%, sys=6.39%, ctx=562, majf=0, minf=2 00:41:16.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:16.013 issued rwts: total=4746,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:16.013 00:41:16.013 Run status group 0 (all jobs): 00:41:16.013 READ: bw=69.1MiB/s (72.5MB/s), 14.0MiB/s-19.8MiB/s (14.7MB/s-20.8MB/s), io=69.6MiB (72.9MB), run=1001-1006msec 00:41:16.013 WRITE: bw=71.7MiB/s (75.2MB/s), 14.1MiB/s-19.9MiB/s (14.8MB/s-20.9MB/s), io=72.1MiB (75.6MB), run=1001-1006msec 00:41:16.013 00:41:16.013 Disk stats (read/write): 00:41:16.013 nvme0n1: ios=4049/4096, merge=0/0, ticks=47260/42621, in_queue=89881, util=97.70% 00:41:16.013 nvme0n2: ios=3620/3919, merge=0/0, ticks=30535/29000, in_queue=59535, util=95.79% 00:41:16.013 nvme0n3: ios=2600/2800, merge=0/0, ticks=18522/17080, in_queue=35602, util=99.68% 00:41:16.013 nvme0n4: ios=3884/4096, merge=0/0, ticks=18699/26469, in_queue=45168, util=89.23% 00:41:16.013 22:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:16.013 22:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=610657 00:41:16.013 22:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:16.013 22:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:16.013 
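The stretch of trace that follows is the hotplug case proper: a 10-second read workload is launched in the background, and the backing bdevs are then deleted underneath it, so the "Operation not supported" io_u errors below are the expected outcome. Condensed, the sequence amounts to the sketch here (commands as logged; paths shortened, and the brace-expansion loop is an assumed simplification of the script's $malloc_bdevs variables):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!                       # 610657 in this run
    sleep 3                          # let the read jobs reach steady state
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for bdev in Malloc{0..6}; do     # assumed expansion of the bdev lists
        scripts/rpc.py bdev_malloc_delete "$bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$? # a nonzero exit is the pass condition here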
[global] 00:41:16.013 thread=1 00:41:16.013 invalidate=1 00:41:16.013 rw=read 00:41:16.013 time_based=1 00:41:16.013 runtime=10 00:41:16.013 ioengine=libaio 00:41:16.013 direct=1 00:41:16.013 bs=4096 00:41:16.013 iodepth=1 00:41:16.013 norandommap=1 00:41:16.013 numjobs=1 00:41:16.013 00:41:16.013 [job0] 00:41:16.013 filename=/dev/nvme0n1 00:41:16.013 [job1] 00:41:16.013 filename=/dev/nvme0n2 00:41:16.013 [job2] 00:41:16.013 filename=/dev/nvme0n3 00:41:16.013 [job3] 00:41:16.013 filename=/dev/nvme0n4 00:41:16.013 Could not set queue depth (nvme0n1) 00:41:16.013 Could not set queue depth (nvme0n2) 00:41:16.013 Could not set queue depth (nvme0n3) 00:41:16.013 Could not set queue depth (nvme0n4) 00:41:16.013 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:16.013 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:16.013 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:16.013 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:16.013 fio-3.35 00:41:16.013 Starting 4 threads 00:41:19.287 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:19.287 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:19.287 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3035136, buflen=4096 00:41:19.287 fio: pid=610800, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:19.287 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:19.287 22:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:19.287 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:41:19.287 fio: pid=610799, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:19.287 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55390208, buflen=4096 00:41:19.287 fio: pid=610793, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:19.544 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:19.544 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:19.544 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=24428544, buflen=4096 00:41:19.544 fio: pid=610796, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:19.544 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:19.544 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:41:19.802 00:41:19.802 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610793: Mon Dec 16 22:47:09 2024 00:41:19.802 read: IOPS=4324, BW=16.9MiB/s (17.7MB/s)(52.8MiB/3127msec) 00:41:19.802 slat (usec): min=5, max=26763, avg=13.85, stdev=349.33 00:41:19.802 clat (usec): min=171, max=40876, avg=214.78, stdev=494.72 00:41:19.802 lat (usec): min=187, max=40885, avg=228.63, stdev=606.30 00:41:19.802 clat percentiles (usec): 00:41:19.802 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:41:19.802 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:41:19.802 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:41:19.802 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 396], 00:41:19.802 | 99.99th=[40633] 00:41:19.802 bw ( KiB/s): min=14664, max=19640, per=72.51%, avg=17441.50, stdev=2350.63, samples=6 00:41:19.802 iops : min= 3666, max= 4910, avg=4360.33, stdev=587.71, samples=6 00:41:19.802 lat (usec) : 250=96.80%, 500=3.17%, 750=0.01% 00:41:19.802 lat (msec) : 50=0.01% 00:41:19.802 cpu : usr=2.37%, sys=6.78%, ctx=13529, majf=0, minf=1 00:41:19.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 issued rwts: total=13524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:19.802 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610796: Mon Dec 16 22:47:09 2024 00:41:19.802 read: IOPS=1766, BW=7066KiB/s (7236kB/s)(23.3MiB/3376msec) 00:41:19.802 slat (usec): min=6, max=15713, avg=21.84, stdev=425.76 00:41:19.802 clat (usec): min=181, max=42006, avg=538.02, stdev=3329.13 00:41:19.802 lat (usec): min=188, max=42017, avg=559.86, stdev=3356.36 00:41:19.802 clat percentiles (usec): 00:41:19.802 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 217], 00:41:19.802 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:41:19.802 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 412], 95.00th=[ 429], 00:41:19.802 | 99.00th=[ 445], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:19.802 | 99.99th=[42206] 00:41:19.802 bw ( KiB/s): min= 96, max=13264, per=25.76%, avg=6196.00, stdev=6383.21, samples=6 00:41:19.802 iops : min= 24, max= 3316, avg=1549.00, stdev=1595.80, samples=6 00:41:19.802 lat (usec) : 250=46.17%, 500=53.09%, 750=0.03% 00:41:19.802 lat (msec) : 20=0.02%, 50=0.67% 00:41:19.802 cpu : usr=0.86%, sys=1.96%, ctx=5974, majf=0, minf=2 00:41:19.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 issued rwts: total=5965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:19.802 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610799: Mon Dec 16 22:47:09 2024 00:41:19.802 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:41:19.802 slat (nsec): min=12405, max=63653, avg=24199.52, stdev=5323.86 00:41:19.802 clat (usec): min=507, max=41909, avg=40419.28, stdev=4771.37 00:41:19.802 lat 
(usec): min=543, max=41935, avg=40443.51, stdev=4770.01 00:41:19.802 clat percentiles (usec): 00:41:19.802 | 1.00th=[ 506], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:19.802 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:19.802 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:19.802 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:19.802 | 99.99th=[41681] 00:41:19.802 bw ( KiB/s): min= 96, max= 104, per=0.40%, avg=97.60, stdev= 3.58, samples=5 00:41:19.802 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:41:19.802 lat (usec) : 750=1.37% 00:41:19.802 lat (msec) : 50=97.26% 00:41:19.802 cpu : usr=0.00%, sys=0.14%, ctx=76, majf=0, minf=2 00:41:19.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:19.802 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=610800: Mon Dec 16 22:47:09 2024 00:41:19.802 read: IOPS=271, BW=1084KiB/s (1110kB/s)(2964KiB/2734msec) 00:41:19.802 slat (nsec): min=6291, max=55487, avg=8748.96, stdev=5157.96 00:41:19.802 clat (usec): min=215, max=41376, avg=3649.76, stdev=11253.98 00:41:19.802 lat (usec): min=225, max=41385, avg=3658.49, stdev=11257.18 00:41:19.802 clat percentiles (usec): 00:41:19.802 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:41:19.802 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:41:19.802 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 334], 95.00th=[41157], 00:41:19.802 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:19.802 | 99.99th=[41157] 00:41:19.802 bw ( KiB/s): min= 112, max= 4400, per=4.87%, avg=1171.20, stdev=1852.74, samples=5 00:41:19.802 iops : min= 28, max= 1100, avg=292.80, stdev=463.19, samples=5 00:41:19.802 lat (usec) : 250=62.40%, 500=28.71%, 750=0.27% 00:41:19.802 lat (msec) : 2=0.13%, 50=8.36% 00:41:19.802 cpu : usr=0.11%, sys=0.26%, ctx=745, majf=0, minf=2 00:41:19.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:19.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:19.802 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:19.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:19.802 00:41:19.802 Run status group 0 (all jobs): 00:41:19.802 READ: bw=23.5MiB/s (24.6MB/s), 98.1KiB/s-16.9MiB/s (100kB/s-17.7MB/s), io=79.3MiB (83.1MB), run=2734-3376msec 00:41:19.802 00:41:19.802 Disk stats (read/write): 00:41:19.802 nvme0n1: ios=13463/0, merge=0/0, ticks=2729/0, in_queue=2729, util=93.38% 00:41:19.802 nvme0n2: ios=5997/0, merge=0/0, ticks=4013/0, in_queue=4013, util=97.65% 00:41:19.802 nvme0n3: ios=112/0, merge=0/0, ticks=3022/0, in_queue=3022, util=99.59% 00:41:19.802 nvme0n4: ios=780/0, merge=0/0, ticks=3547/0, in_queue=3547, util=99.59% 00:41:19.802 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:19.802 22:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:20.060 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:20.060 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:20.317 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:20.317 22:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 610657 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:20.574 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:20.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:20.831 nvmf hotplug test: fio failed as expected 00:41:20.831 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:21.089 
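After the expected fio failure, the controller is torn down with nvme disconnect, and waitforserial_disconnect polls until no block device reports the subsystem serial any more. A rough reconstruction of that helper, based only on the lsblk/grep probes visible in the trace (the retry bound is an assumption, not taken from this log):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # the trace shows both wide and list-format lsblk probes for the serial
        while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed timeout; returns 0 once gone
            sleep 1
        done
        return 0
    }

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME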
22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:21.089 rmmod nvme_tcp 00:41:21.089 rmmod nvme_fabrics 00:41:21.089 rmmod nvme_keyring 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 608241 ']' 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 608241 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 608241 ']' 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 608241 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608241 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608241' 00:41:21.089 killing process with pid 608241 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 608241 00:41:21.089 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 608241 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:21.348 22:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:21.348 22:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:23.890 00:41:23.890 real 0m25.740s 00:41:23.890 user 1m30.639s 00:41:23.890 sys 0m10.785s 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:23.890 ************************************ 00:41:23.890 END TEST nvmf_fio_target 00:41:23.890 ************************************ 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:23.890 ************************************ 00:41:23.890 START TEST nvmf_bdevio 00:41:23.890 ************************************ 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:23.890 * Looking for test storage... 
00:41:23.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:23.890 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.891 --rc genhtml_branch_coverage=1 00:41:23.891 --rc genhtml_function_coverage=1 00:41:23.891 --rc genhtml_legend=1 00:41:23.891 --rc geninfo_all_blocks=1 00:41:23.891 --rc geninfo_unexecuted_blocks=1 00:41:23.891 00:41:23.891 ' 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.891 --rc genhtml_branch_coverage=1 00:41:23.891 --rc genhtml_function_coverage=1 00:41:23.891 --rc genhtml_legend=1 00:41:23.891 --rc geninfo_all_blocks=1 00:41:23.891 --rc geninfo_unexecuted_blocks=1 00:41:23.891 00:41:23.891 ' 00:41:23.891 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:23.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.892 --rc genhtml_branch_coverage=1 00:41:23.892 --rc genhtml_function_coverage=1 00:41:23.892 --rc genhtml_legend=1 00:41:23.892 --rc geninfo_all_blocks=1 00:41:23.892 --rc geninfo_unexecuted_blocks=1 00:41:23.892 00:41:23.892 ' 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:23.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.892 --rc genhtml_branch_coverage=1 00:41:23.892 --rc genhtml_function_coverage=1 00:41:23.892 --rc genhtml_legend=1 00:41:23.892 --rc geninfo_all_blocks=1 00:41:23.892 --rc geninfo_unexecuted_blocks=1 00:41:23.892 00:41:23.892 ' 00:41:23.892 22:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:23.892 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:23.896 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:23.897 22:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:23.897 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:23.898 22:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:29.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:29.173 22:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:29.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:29.173 Found net devices under 0000:af:00.0: cvl_0_0 00:41:29.173 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:29.174 Found net devices under 0000:af:00.1: cvl_0_1 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:29.174 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:29.433 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:29.433 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:29.433 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:29.433 22:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:29.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:29.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:41:29.433 00:41:29.433 --- 10.0.0.2 ping statistics --- 00:41:29.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.433 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:29.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:29.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:41:29.433 00:41:29.433 --- 10.0.0.1 ping statistics --- 00:41:29.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.433 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:29.433 22:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=614957 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 614957 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 614957 ']' 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.433 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.692 [2024-12-16 22:47:19.147181] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:29.692 [2024-12-16 22:47:19.148101] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:29.692 [2024-12-16 22:47:19.148134] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:29.692 [2024-12-16 22:47:19.227629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:29.692 [2024-12-16 22:47:19.250089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:29.692 [2024-12-16 22:47:19.250128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:29.692 [2024-12-16 22:47:19.250135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:29.692 [2024-12-16 22:47:19.250142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:29.692 [2024-12-16 22:47:19.250146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:29.692 [2024-12-16 22:47:19.251646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:29.692 [2024-12-16 22:47:19.251753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:29.692 [2024-12-16 22:47:19.251859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:29.692 [2024-12-16 22:47:19.251860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:29.692 [2024-12-16 22:47:19.314291] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
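The startup traced above is the harness's nvmfappstart path: nvmf_tgt is launched inside the freshly created namespace with --interrupt-mode and core mask 0x78 (cores 3-6), and the script blocks on waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that pattern, assuming the SPDK checkout as the working directory and the waitforlisten helper from autotest_common.sh; the pid capture via $! is illustrative, the flags and names come from the trace:

NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)   # namespace created just above
"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!                                           # illustrative; the harness records the real pid
waitforlisten "$nvmfpid"                             # polls /var/tmp/spdk.sock until the target is up

Once the target answers, each reactor and spdk_thread sits in interrupt mode and only wakes when work arrives, which is what the reactor_is_idle probes later in this log verify.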
00:41:29.692 [2024-12-16 22:47:19.315390] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:29.692 [2024-12-16 22:47:19.315572] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:29.692 [2024-12-16 22:47:19.315943] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:29.692 [2024-12-16 22:47:19.315986] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.692 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:29.693 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:29.693 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.693 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.693 [2024-12-16 22:47:19.380667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.950 Malloc0 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.950 22:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.950 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:29.951 [2024-12-16 22:47:19.460721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:29.951 { 00:41:29.951 "params": { 00:41:29.951 "name": "Nvme$subsystem", 00:41:29.951 "trtype": "$TEST_TRANSPORT", 00:41:29.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.951 "adrfam": "ipv4", 00:41:29.951 "trsvcid": "$NVMF_PORT", 00:41:29.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.951 "hdgst": ${hdgst:-false}, 00:41:29.951 "ddgst": ${ddgst:-false} 00:41:29.951 }, 00:41:29.951 "method": "bdev_nvme_attach_controller" 00:41:29.951 } 00:41:29.951 EOF 00:41:29.951 )") 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:29.951 22:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:29.951 "params": { 00:41:29.951 "name": "Nvme1", 00:41:29.951 "trtype": "tcp", 00:41:29.951 "traddr": "10.0.0.2", 00:41:29.951 "adrfam": "ipv4", 00:41:29.951 "trsvcid": "4420", 00:41:29.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.951 "hdgst": false, 00:41:29.951 "ddgst": false 00:41:29.951 }, 00:41:29.951 "method": "bdev_nvme_attach_controller" 00:41:29.951 }' 00:41:29.951 [2024-12-16 22:47:19.511040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
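The JSON fragment printed just above comes from gen_nvmf_target_json and is what points bdevio's initiator side at the cnode1 subsystem created a moment earlier (controller Nvme1, 10.0.0.2:4420, digests off). The --json /dev/fd/62 argument is bash handing the generated config over as an already-open file descriptor; schematically, and with the workspace path shortened:

# Schematic form of the invocation above: process substitution exposes the
# generated config on a /dev/fd/N path, so bdevio reads it like a file and
# no temporary file is needed.
./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)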
00:41:29.951 [2024-12-16 22:47:19.511082] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615146 ] 00:41:29.951 [2024-12-16 22:47:19.583364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:29.951 [2024-12-16 22:47:19.608109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.951 [2024-12-16 22:47:19.608227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.951 [2024-12-16 22:47:19.608228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:30.207 I/O targets: 00:41:30.207 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:30.207 00:41:30.207 00:41:30.207 CUnit - A unit testing framework for C - Version 2.1-3 00:41:30.207 http://cunit.sourceforge.net/ 00:41:30.207 00:41:30.207 00:41:30.207 Suite: bdevio tests on: Nvme1n1 00:41:30.207 Test: blockdev write read block ...passed 00:41:30.207 Test: blockdev write zeroes read block ...passed 00:41:30.207 Test: blockdev write zeroes read no split ...passed 00:41:30.464 Test: blockdev write zeroes read split ...passed 00:41:30.464 Test: blockdev write zeroes read split partial ...passed 00:41:30.464 Test: blockdev reset ...[2024-12-16 22:47:19.946222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:30.464 [2024-12-16 22:47:19.946286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfe630 (9): Bad file descriptor 00:41:30.464 [2024-12-16 22:47:19.949881] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:41:30.464 passed 00:41:30.464 Test: blockdev write read 8 blocks ...passed 00:41:30.464 Test: blockdev write read size > 128k ...passed 00:41:30.464 Test: blockdev write read invalid size ...passed 00:41:30.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:30.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:30.464 Test: blockdev write read max offset ...passed 00:41:30.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:30.464 Test: blockdev writev readv 8 blocks ...passed 00:41:30.464 Test: blockdev writev readv 30 x 1block ...passed 00:41:30.464 Test: blockdev writev readv block ...passed 00:41:30.464 Test: blockdev writev readv size > 128k ...passed 00:41:30.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:30.464 Test: blockdev comparev and writev ...[2024-12-16 22:47:20.123566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.123602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.123621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.123632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.123930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.123943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.123958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.123970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.124269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.124282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.124298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.124309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.124608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.124620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:30.464 [2024-12-16 22:47:20.124635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:30.464 [2024-12-16 22:47:20.124646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:30.464 passed 00:41:30.722 Test: blockdev nvme passthru rw ...passed 00:41:30.722 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:47:20.207576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:30.722 [2024-12-16 22:47:20.207598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:30.722 [2024-12-16 22:47:20.207724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:30.722 [2024-12-16 22:47:20.207738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:30.722 [2024-12-16 22:47:20.207854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:30.722 [2024-12-16 22:47:20.207865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:30.722 [2024-12-16 22:47:20.207978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:30.722 [2024-12-16 22:47:20.207989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:30.722 passed 00:41:30.722 Test: blockdev nvme admin passthru ...passed 00:41:30.722 Test: blockdev copy ...passed 00:41:30.722 00:41:30.722 Run Summary: Type Total Ran Passed Failed Inactive 00:41:30.722 suites 1 1 n/a 0 0 00:41:30.722 tests 23 23 23 0 0 00:41:30.722 asserts 152 152 152 0 n/a 00:41:30.722 00:41:30.722 Elapsed time = 1.033 seconds 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:30.722 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:30.722 rmmod nvme_tcp 00:41:30.981 rmmod nvme_fabrics 00:41:30.981 rmmod nvme_keyring 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
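The teardown that starts here is nvmftestfini: sync, unload nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess takes down the target. Its guard is visible in the xtrace that follows: ps --no-headers -o comm= must still report the expected process name (reactor_3 here) for pid 614957 before any signal is sent. A loose reconstruction of that guard, with error handling trimmed; the real helper lives in autotest_common.sh:

killprocess() {
    local pid=$1
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_3
    # the real helper special-cases a sudo wrapper here; trimmed in this sketch
    [[ $name == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # wait reaps the child and surfaces its status
}

The comm-name check lets the helper notice when the pid no longer names the process it launched, which matters on a busy CI node where pids get recycled.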
00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 614957 ']' 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 614957 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 614957 ']' 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 614957 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 614957 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 614957' 00:41:30.981 killing process with pid 614957 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 614957 00:41:30.981 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 614957 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:31.240 22:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.144 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:33.144 00:41:33.144 real 0m9.728s 00:41:33.144 user 0m8.029s 
00:41:33.144 sys 0m5.062s 00:41:33.144 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.144 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:33.144 ************************************ 00:41:33.144 END TEST nvmf_bdevio 00:41:33.144 ************************************ 00:41:33.404 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:33.404 00:41:33.404 real 4m29.478s 00:41:33.404 user 9m1.156s 00:41:33.404 sys 1m49.715s 00:41:33.404 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.404 22:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:33.404 ************************************ 00:41:33.404 END TEST nvmf_target_core_interrupt_mode 00:41:33.404 ************************************ 00:41:33.404 22:47:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:33.404 22:47:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:33.404 22:47:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.404 22:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.404 ************************************ 00:41:33.404 START TEST nvmf_interrupt 00:41:33.404 ************************************ 00:41:33.404 22:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:33.404 * Looking for test storage... 
00:41:33.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.404 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.664 --rc genhtml_branch_coverage=1 00:41:33.664 --rc genhtml_function_coverage=1 00:41:33.664 --rc genhtml_legend=1 00:41:33.664 --rc geninfo_all_blocks=1 00:41:33.664 --rc geninfo_unexecuted_blocks=1 00:41:33.664 00:41:33.664 ' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.664 --rc genhtml_branch_coverage=1 00:41:33.664 --rc genhtml_function_coverage=1 00:41:33.664 --rc genhtml_legend=1 00:41:33.664 --rc geninfo_all_blocks=1 00:41:33.664 --rc geninfo_unexecuted_blocks=1 00:41:33.664 00:41:33.664 ' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.664 --rc genhtml_branch_coverage=1 00:41:33.664 --rc genhtml_function_coverage=1 00:41:33.664 --rc genhtml_legend=1 00:41:33.664 --rc geninfo_all_blocks=1 00:41:33.664 --rc geninfo_unexecuted_blocks=1 00:41:33.664 00:41:33.664 ' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.664 --rc genhtml_branch_coverage=1 00:41:33.664 --rc genhtml_function_coverage=1 00:41:33.664 --rc genhtml_legend=1 00:41:33.664 --rc geninfo_all_blocks=1 00:41:33.664 --rc geninfo_unexecuted_blocks=1 00:41:33.664 00:41:33.664 ' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:33.664 22:47:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:40.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.244 22:47:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:40.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.244 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:40.245 Found net devices under 0000:af:00.0: cvl_0_0 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:40.245 Found net devices under 0000:af:00.1: cvl_0_1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:40.245 22:47:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.245 22:47:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:40.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:41:40.245 00:41:40.245 --- 10.0.0.2 ping statistics --- 00:41:40.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.245 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:40.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:41:40.245 00:41:40.245 --- 10.0.0.1 ping statistics --- 00:41:40.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.245 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=618683 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 618683 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 618683 ']' 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:40.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 [2024-12-16 22:47:29.113295] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:40.245 [2024-12-16 22:47:29.114203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:40.245 [2024-12-16 22:47:29.114235] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:40.245 [2024-12-16 22:47:29.191042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:40.245 [2024-12-16 22:47:29.212755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
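Everything from the PCI rescan down to the two pings repeats, for the nvmf_interrupt test, the plumbing already done for the bdevio run: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, and one ping in each direction proves the link. Condensed to the plain iproute2 commands from the trace (run as root; interface and namespace names are specific to this rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

The accompanying iptables rule opens TCP/4420 on the initiator interface and is tagged with an SPDK_NVMF comment, which is what lets the iptr cleanup at the end of each test strip it back out via iptables-save | grep -v SPDK_NVMF | iptables-restore.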
00:41:40.245 [2024-12-16 22:47:29.212790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:40.245 [2024-12-16 22:47:29.212797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:40.245 [2024-12-16 22:47:29.212803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:40.245 [2024-12-16 22:47:29.212808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:40.245 [2024-12-16 22:47:29.213838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.245 [2024-12-16 22:47:29.213839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.245 [2024-12-16 22:47:29.276036] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:40.245 [2024-12-16 22:47:29.276597] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:40.245 [2024-12-16 22:47:29.276774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:40.245 5000+0 records in 00:41:40.245 5000+0 records out 00:41:40.245 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0165983 s, 617 MB/s 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 AIO0 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 [2024-12-16 22:47:29.410672] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.245 22:47:29 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.245 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:40.246 [2024-12-16 22:47:29.442918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618683 0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 0 idle 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618683 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618683 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 618683 1 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 1 idle 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618687 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618687 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=618937 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
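[Editor's note: the spdk_nvme_perf invocation above is the load generator for the busy checks that follow. Restated with its flags annotated (reading -M as the tool's rwmixread option, i.e. the read share of the mix):]

  # qd 256, 4 KiB I/Os, random mixed workload (~30% reads), 10 s run;
  # -c 0xC pins the initiator to cores 2-3, keeping it off the target's cores 0-1
  ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'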
00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618683 0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618683 0 busy 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:40.246 22:47:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618683 root 20 0 128.2g 46848 33792 R 73.3 0.1 0:00.33 reactor_0' 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618683 root 20 0 128.2g 46848 33792 R 73.3 0.1 0:00.33 reactor_0 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 618683 1 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 618683 1 busy 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618687 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.24 reactor_1' 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618687 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.24 reactor_1 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.504 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:40.761 22:47:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 618937 00:41:50.722 Initializing NVMe Controllers 00:41:50.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:50.722 Controller IO queue size 256, less than required. 00:41:50.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:50.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:50.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:50.722 Initialization complete. Launching workers. 
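[Editor's note: the "Controller IO queue size 256, less than required" warning below is expected here: the transport was created earlier with "nvmf_create_transport -t tcp -o -u 8192 -q 256", so the controller advertises 256-entry I/O queues, and a perf queue depth of 256 cannot all be outstanding at once; the overflow simply waits inside the initiator driver, as the message says. A hypothetical way to avoid the driver-side queuing would be a deeper transport queue at creation time:]

  # sketch only: allow deeper I/O queues so qd 256 fits without queuing in the driver
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 512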
00:41:50.722 ======================================================== 00:41:50.722 Latency(us) 00:41:50.722 Device Information : IOPS MiB/s Average min max 00:41:50.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16169.50 63.16 15840.38 2718.43 30844.77 00:41:50.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16351.60 63.87 15661.35 6987.25 26562.63 00:41:50.722 ======================================================== 00:41:50.722 Total : 32521.10 127.04 15750.36 2718.43 30844.77 00:41:50.722 00:41:50.722 [2024-12-16 22:47:39.995925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda8bd0 is same with the state(6) to be set 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618683 0 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 0 idle 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:50.722 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618683 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.20 reactor_0' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618683 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.20 reactor_0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 618683 1 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 1 idle 
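[Editor's note: the reactor_is_idle/reactor_is_busy checks interleaved through this test reduce to one batched top sample per probe: take the thread named reactor_<core> from `top -bHn 1 -p <pid>`, read the %CPU column, and compare it against a threshold. The same probe standalone, with $pid holding the nvmf_tgt PID:]

  # one thread-level batch sample; SPDK names its poller threads reactor_<core>
  cpu=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_0 | awk '{print $9}')
  # drop the decimals and apply the harness thresholds (idle when <= 30%, busy when >= the busy threshold)
  (( ${cpu%.*} <= 30 )) && echo "reactor_0 idle at ${cpu}% CPU"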
00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618687 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618687 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:50.723 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:51.292 22:47:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:51.292 22:47:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:51.292 22:47:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:51.292 22:47:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:51.292 22:47:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
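[Editor's note: with the target idle again, the kernel initiator attaches via nvme-cli and the harness polls lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME, assigned with -s when the subsystem was created) appears. The waitforserial loop, condensed to its essentials:]

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # wait for the namespace to surface as a block device with the expected serial
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done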
00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618683 0 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 0 idle 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:53.199 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618683 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0' 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618683 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.47 reactor_0 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 618683 1 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 618683 1 idle 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=618683 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:53.458 22:47:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 618683 -w 256 00:41:53.458 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 618687 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1' 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 618687 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.11 reactor_1 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:53.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:53.718 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:53.718 rmmod nvme_tcp 00:41:53.718 rmmod nvme_fabrics 00:41:53.718 rmmod nvme_keyring 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:53.977 22:47:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 618683 ']' 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 618683 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 618683 ']' 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 618683 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 618683 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 618683' 00:41:53.977 killing process with pid 618683 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 618683 00:41:53.977 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 618683 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:54.236 22:47:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:56.140 22:47:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:56.141 00:41:56.141 real 0m22.852s 00:41:56.141 user 0m39.704s 00:41:56.141 sys 0m8.320s 00:41:56.141 22:47:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:56.141 22:47:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:56.141 ************************************ 00:41:56.141 END TEST nvmf_interrupt 00:41:56.141 ************************************ 00:41:56.141 00:41:56.141 real 35m25.485s 00:41:56.141 user 86m12.791s 00:41:56.141 sys 10m18.751s 00:41:56.141 22:47:45 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:56.141 22:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.141 ************************************ 00:41:56.141 END TEST nvmf_tcp 00:41:56.141 ************************************ 00:41:56.400 22:47:45 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:56.400 22:47:45 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:56.400 22:47:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:56.400 22:47:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:56.400 22:47:45 -- common/autotest_common.sh@10 -- # set +x 00:41:56.400 ************************************ 00:41:56.400 START TEST spdkcli_nvmf_tcp 00:41:56.400 ************************************ 00:41:56.400 22:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:56.401 * Looking for test storage... 00:41:56.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:56.401 22:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:56.401 22:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:56.401 22:47:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.401 --rc genhtml_branch_coverage=1 00:41:56.401 --rc genhtml_function_coverage=1 00:41:56.401 --rc genhtml_legend=1 00:41:56.401 --rc geninfo_all_blocks=1 00:41:56.401 --rc geninfo_unexecuted_blocks=1 00:41:56.401 00:41:56.401 ' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.401 --rc genhtml_branch_coverage=1 00:41:56.401 --rc genhtml_function_coverage=1 00:41:56.401 --rc genhtml_legend=1 00:41:56.401 --rc geninfo_all_blocks=1 00:41:56.401 --rc geninfo_unexecuted_blocks=1 00:41:56.401 00:41:56.401 ' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.401 --rc genhtml_branch_coverage=1 00:41:56.401 --rc genhtml_function_coverage=1 00:41:56.401 --rc genhtml_legend=1 00:41:56.401 --rc geninfo_all_blocks=1 00:41:56.401 --rc geninfo_unexecuted_blocks=1 00:41:56.401 00:41:56.401 ' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:56.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:56.401 --rc genhtml_branch_coverage=1 00:41:56.401 --rc genhtml_function_coverage=1 00:41:56.401 --rc genhtml_legend=1 00:41:56.401 --rc geninfo_all_blocks=1 00:41:56.401 --rc geninfo_unexecuted_blocks=1 00:41:56.401 00:41:56.401 ' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:56.401 
22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:56.401 22:47:46 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:56.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:56.401 22:47:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=621557 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 621557 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 621557 ']' 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:56.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.660 [2024-12-16 22:47:46.154647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
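[Editor's note: the spdkcli test exercises the same nvmf RPC surface through the CLI's configshell tree; spdkcli_job.py feeds it the quoted command list that follows. Individual commands can also be run one-shot through scripts/spdkcli.py, the way the later check_match step runs `ll /nvmf`. A sketch, assuming spdkcli.py joins its arguments into a single shell-tree command as that invocation suggests:]

  # create a malloc bdev (size 32 MiB, 512 B blocks), a TCP transport, and a subsystem
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  # dump the resulting tree; check_match diffs this output against spdkcli_nvmf.test.match
  ./scripts/spdkcli.py ll /nvmf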
00:41:56.660 [2024-12-16 22:47:46.154695] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621557 ] 00:41:56.660 [2024-12-16 22:47:46.226494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:56.660 [2024-12-16 22:47:46.250183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.660 [2024-12-16 22:47:46.250186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:56.660 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.918 22:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:56.918 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:56.918 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:56.918 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:56.918 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:56.918 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:56.918 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:56.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:56.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:56.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:56.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:56.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:56.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:56.919 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:56.919 ' 00:41:59.448 [2024-12-16 22:47:49.098117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:00.823 [2024-12-16 22:47:50.438534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:03.423 [2024-12-16 22:47:52.922185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:05.403 [2024-12-16 22:47:55.080928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:07.302 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:07.302 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:07.302 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:07.302 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:07.302 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:07.302 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:07.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:07.302 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:07.302 22:47:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:07.869 
22:47:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:07.869 22:47:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:07.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:07.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:07.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:07.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:07.869 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:07.869 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:07.869 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:07.869 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:07.869 ' 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:14.431 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:14.431 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:14.431 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:14.431 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:14.431 22:48:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:14.431 22:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:14.431 22:48:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:14.431 
22:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 621557 00:42:14.431 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621557 ']' 00:42:14.431 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621557 00:42:14.431 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621557 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 621557' 00:42:14.432 killing process with pid 621557 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 621557 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 621557 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 621557 ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 621557 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 621557 ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 621557 00:42:14.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (621557) - No such process 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 621557 is not found' 00:42:14.432 Process with pid 621557 is not found 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:14.432 00:42:14.432 real 0m17.348s 00:42:14.432 user 0m38.190s 00:42:14.432 sys 0m0.892s 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.432 22:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:14.432 ************************************ 00:42:14.432 END TEST spdkcli_nvmf_tcp 00:42:14.432 ************************************ 00:42:14.432 22:48:03 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:14.432 22:48:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:14.432 22:48:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:14.432 22:48:03 -- common/autotest_common.sh@10 -- # set +x 00:42:14.432 ************************************ 00:42:14.432 START TEST nvmf_identify_passthru 00:42:14.432 ************************************ 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:14.432 * Looking for test storage... 
00:42:14.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:14.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.432 --rc genhtml_branch_coverage=1 00:42:14.432 --rc genhtml_function_coverage=1 00:42:14.432 --rc genhtml_legend=1 00:42:14.432 --rc geninfo_all_blocks=1 00:42:14.432 --rc geninfo_unexecuted_blocks=1 00:42:14.432 00:42:14.432 ' 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:14.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.432 --rc genhtml_branch_coverage=1 00:42:14.432 --rc genhtml_function_coverage=1 00:42:14.432 --rc genhtml_legend=1 00:42:14.432 --rc geninfo_all_blocks=1 00:42:14.432 --rc geninfo_unexecuted_blocks=1 00:42:14.432 00:42:14.432 ' 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:14.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.432 --rc genhtml_branch_coverage=1 00:42:14.432 --rc genhtml_function_coverage=1 00:42:14.432 --rc genhtml_legend=1 00:42:14.432 --rc geninfo_all_blocks=1 00:42:14.432 --rc geninfo_unexecuted_blocks=1 00:42:14.432 00:42:14.432 ' 00:42:14.432 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:14.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:14.432 --rc genhtml_branch_coverage=1 00:42:14.432 --rc genhtml_function_coverage=1 00:42:14.432 --rc genhtml_legend=1 00:42:14.432 --rc geninfo_all_blocks=1 00:42:14.432 --rc geninfo_unexecuted_blocks=1 00:42:14.432 00:42:14.432 ' 00:42:14.432 22:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:14.432 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:14.432 22:48:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:14.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:14.433 22:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:14.433 22:48:03 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:14.433 22:48:03 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:14.433 22:48:03 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:14.433 22:48:03 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:14.433 22:48:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:14.433 22:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.433 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:14.433 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:14.433 22:48:03 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:14.433 22:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:19.712 22:48:09 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:19.712 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:19.712 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.712 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:19.713 Found net devices under 0000:af:00.0: cvl_0_0 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:19.713 Found net devices under 0000:af:00.1: cvl_0_1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:19.713 22:48:09 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:19.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:19.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:42:19.713 00:42:19.713 --- 10.0.0.2 ping statistics --- 00:42:19.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.713 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:19.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:19.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:42:19.713 00:42:19.713 --- 10.0.0.1 ping statistics --- 00:42:19.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.713 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:19.713 22:48:09 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:19.713 22:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:19.713 22:48:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:23.906 22:48:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:23.906 22:48:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:23.906 22:48:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:23.906 22:48:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=629164 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:28.102 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 629164 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 629164 ']' 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:28.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:28.102 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.102 [2024-12-16 22:48:17.746772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:28.102 [2024-12-16 22:48:17.746817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:28.361 [2024-12-16 22:48:17.824626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:28.361 [2024-12-16 22:48:17.848065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:28.361 [2024-12-16 22:48:17.848102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:42:28.361 [2024-12-16 22:48:17.848109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:28.361 [2024-12-16 22:48:17.848115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:28.361 [2024-12-16 22:48:17.848120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:28.361 [2024-12-16 22:48:17.849461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:28.361 [2024-12-16 22:48:17.849500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:28.361 [2024-12-16 22:48:17.849605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.361 [2024-12-16 22:48:17.849605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:28.361 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:28.361 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:28.361 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:28.361 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.361 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.361 INFO: Log level set to 20 00:42:28.361 INFO: Requests: 00:42:28.361 { 00:42:28.361 "jsonrpc": "2.0", 00:42:28.361 "method": "nvmf_set_config", 00:42:28.361 "id": 1, 00:42:28.361 "params": { 00:42:28.362 "admin_cmd_passthru": { 00:42:28.362 "identify_ctrlr": true 00:42:28.362 } 00:42:28.362 } 00:42:28.362 } 00:42:28.362 00:42:28.362 INFO: response: 00:42:28.362 { 00:42:28.362 "jsonrpc": "2.0", 00:42:28.362 "id": 1, 00:42:28.362 "result": true 00:42:28.362 } 00:42:28.362 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.362 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.362 INFO: Setting log level to 20 00:42:28.362 INFO: Setting log level to 20 00:42:28.362 INFO: Log level set to 20 00:42:28.362 INFO: Log level set to 20 00:42:28.362 INFO: Requests: 00:42:28.362 { 00:42:28.362 "jsonrpc": "2.0", 00:42:28.362 "method": "framework_start_init", 00:42:28.362 "id": 1 00:42:28.362 } 00:42:28.362 00:42:28.362 INFO: Requests: 00:42:28.362 { 00:42:28.362 "jsonrpc": "2.0", 00:42:28.362 "method": "framework_start_init", 00:42:28.362 "id": 1 00:42:28.362 } 00:42:28.362 00:42:28.362 [2024-12-16 22:48:17.976040] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:28.362 INFO: response: 00:42:28.362 { 00:42:28.362 "jsonrpc": "2.0", 00:42:28.362 "id": 1, 00:42:28.362 "result": true 00:42:28.362 } 00:42:28.362 00:42:28.362 INFO: response: 00:42:28.362 { 00:42:28.362 "jsonrpc": "2.0", 00:42:28.362 "id": 1, 00:42:28.362 "result": true 00:42:28.362 } 00:42:28.362 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.362 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.362 22:48:17 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:42:28.362 INFO: Setting log level to 40 00:42:28.362 INFO: Setting log level to 40 00:42:28.362 INFO: Setting log level to 40 00:42:28.362 [2024-12-16 22:48:17.989328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.362 22:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:28.362 22:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:28.362 22:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:28.362 22:48:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.362 22:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 Nvme0n1 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 [2024-12-16 22:48:20.892533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 [ 00:42:31.652 { 00:42:31.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:31.652 "subtype": "Discovery", 00:42:31.652 "listen_addresses": [], 00:42:31.652 "allow_any_host": true, 00:42:31.652 "hosts": [] 00:42:31.652 }, 00:42:31.652 { 00:42:31.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:31.652 "subtype": "NVMe", 00:42:31.652 "listen_addresses": [ 00:42:31.652 { 00:42:31.652 "trtype": "TCP", 00:42:31.652 "adrfam": "IPv4", 00:42:31.652 "traddr": "10.0.0.2", 00:42:31.652 "trsvcid": "4420" 00:42:31.652 } 00:42:31.652 ], 00:42:31.652 "allow_any_host": true, 00:42:31.652 "hosts": [], 00:42:31.652 "serial_number": 
"SPDK00000000000001", 00:42:31.652 "model_number": "SPDK bdev Controller", 00:42:31.652 "max_namespaces": 1, 00:42:31.652 "min_cntlid": 1, 00:42:31.652 "max_cntlid": 65519, 00:42:31.652 "namespaces": [ 00:42:31.652 { 00:42:31.652 "nsid": 1, 00:42:31.652 "bdev_name": "Nvme0n1", 00:42:31.652 "name": "Nvme0n1", 00:42:31.652 "nguid": "008CBE58E3544BFE8C30BC1CDC3F9CBE", 00:42:31.652 "uuid": "008cbe58-e354-4bfe-8c30-bc1cdc3f9cbe" 00:42:31.652 } 00:42:31.652 ] 00:42:31.652 } 00:42:31.652 ] 00:42:31.652 22:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:31.652 22:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:31.652 22:48:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:31.652 rmmod nvme_tcp 00:42:31.652 rmmod nvme_fabrics 00:42:31.652 rmmod nvme_keyring 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 629164 ']' 00:42:31.652 22:48:21 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 629164 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 629164 ']' 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 629164 00:42:31.652 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:31.653 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:31.653 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629164 00:42:31.912 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:31.912 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:31.912 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629164' 00:42:31.912 killing process with pid 629164 00:42:31.912 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 629164 00:42:31.912 22:48:21 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 629164 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:33.292 22:48:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.292 22:48:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:33.292 22:48:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.198 22:48:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:35.198 00:42:35.198 real 0m21.536s 00:42:35.198 user 0m27.133s 00:42:35.198 sys 0m5.257s 00:42:35.198 22:48:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:35.198 22:48:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:35.198 ************************************ 00:42:35.198 END TEST nvmf_identify_passthru 00:42:35.198 ************************************ 00:42:35.198 22:48:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:35.198 22:48:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:35.198 22:48:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:35.198 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:42:35.459 ************************************ 00:42:35.459 START TEST nvmf_dif 00:42:35.459 ************************************ 00:42:35.459 22:48:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:35.459 * Looking for test storage... 
00:42:35.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:35.459 22:48:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:35.459 22:48:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:35.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.459 --rc genhtml_branch_coverage=1 00:42:35.459 --rc genhtml_function_coverage=1 00:42:35.459 --rc genhtml_legend=1 00:42:35.459 --rc geninfo_all_blocks=1 00:42:35.459 --rc geninfo_unexecuted_blocks=1 00:42:35.459 00:42:35.459 ' 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.460 --rc genhtml_branch_coverage=1 00:42:35.460 --rc genhtml_function_coverage=1 00:42:35.460 --rc genhtml_legend=1 00:42:35.460 --rc geninfo_all_blocks=1 00:42:35.460 --rc geninfo_unexecuted_blocks=1 00:42:35.460 00:42:35.460 ' 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:42:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.460 --rc genhtml_branch_coverage=1 00:42:35.460 --rc genhtml_function_coverage=1 00:42:35.460 --rc genhtml_legend=1 00:42:35.460 --rc geninfo_all_blocks=1 00:42:35.460 --rc geninfo_unexecuted_blocks=1 00:42:35.460 00:42:35.460 ' 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.460 --rc genhtml_branch_coverage=1 00:42:35.460 --rc genhtml_function_coverage=1 00:42:35.460 --rc genhtml_legend=1 00:42:35.460 --rc geninfo_all_blocks=1 00:42:35.460 --rc geninfo_unexecuted_blocks=1 00:42:35.460 00:42:35.460 ' 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:35.460 22:48:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:35.460 22:48:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:35.460 22:48:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:35.460 22:48:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:35.460 22:48:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.460 22:48:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.460 22:48:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.460 22:48:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:35.460 22:48:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:35.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:35.460 22:48:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:35.460 22:48:25 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:42:35.460 22:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:42.033 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:42.033 
22:48:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:42.033 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:42.033 Found net devices under 0000:af:00.0: cvl_0_0 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:42.033 22:48:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:42.034 Found net devices under 0000:af:00.1: cvl_0_1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:42.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:42.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:42:42.034 00:42:42.034 --- 10.0.0.2 ping statistics --- 00:42:42.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.034 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:42.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
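Note: the nvmf_tcp_init trace above builds a point-to-point test link out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens the NVMe/TCP listener port 4420, and pings in both directions verify the link. A minimal stand-alone sketch of the same topology, assuming two spare ports named eth_tgt and eth_ini (placeholder names, not the ones in this run):

  #!/usr/bin/env bash
  # Sketch of the namespace topology traced above; eth_tgt and eth_ini
  # are hypothetical interface names standing in for cvl_0_0/cvl_0_1.
  set -e
  ip -4 addr flush eth_tgt && ip -4 addr flush eth_ini   # start clean
  ip netns add spdk_tgt_ns                       # target-side namespace
  ip link set eth_tgt netns spdk_tgt_ns          # move target NIC into it
  ip addr add 10.0.0.1/24 dev eth_ini            # initiator stays in root ns
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec spdk_tgt_ns ip link set eth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  # Accept NVMe/TCP traffic arriving on the initiator-side interface.
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # initiator -> target
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # target -> initiator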
00:42:42.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:42:42.034 00:42:42.034 --- 10.0.0.1 ping statistics --- 00:42:42.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.034 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:42.034 22:48:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:43.940 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:43.940 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:43.940 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:44.199 22:48:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:44.199 22:48:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=634535 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:44.199 22:48:33 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 634535 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 634535 ']' 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:42:44.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.199 22:48:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.199 [2024-12-16 22:48:33.869903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:44.200 [2024-12-16 22:48:33.869950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:44.458 [2024-12-16 22:48:33.947026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.458 [2024-12-16 22:48:33.968404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:44.458 [2024-12-16 22:48:33.968436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:44.459 [2024-12-16 22:48:33.968446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:44.459 [2024-12-16 22:48:33.968452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:44.459 [2024-12-16 22:48:33.968457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:44.459 [2024-12-16 22:48:33.968948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:44.459 22:48:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.459 22:48:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:44.459 22:48:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:44.459 22:48:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.459 [2024-12-16 22:48:34.099961] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.459 22:48:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.459 ************************************ 00:42:44.459 START TEST fio_dif_1_default 00:42:44.459 ************************************ 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:44.459 bdev_null0 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.459 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:44.718 [2024-12-16 22:48:34.168259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.718 { 00:42:44.718 "params": { 00:42:44.718 "name": "Nvme$subsystem", 00:42:44.718 "trtype": "$TEST_TRANSPORT", 00:42:44.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.718 "adrfam": "ipv4", 00:42:44.718 "trsvcid": "$NVMF_PORT", 00:42:44.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.718 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:44.718 "hdgst": ${hdgst:-false}, 00:42:44.718 "ddgst": ${ddgst:-false} 00:42:44.718 }, 00:42:44.718 "method": "bdev_nvme_attach_controller" 00:42:44.718 } 00:42:44.718 EOF 00:42:44.718 )") 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:44.718 "params": { 00:42:44.718 "name": "Nvme0", 00:42:44.718 "trtype": "tcp", 00:42:44.718 "traddr": "10.0.0.2", 00:42:44.718 "adrfam": "ipv4", 00:42:44.718 "trsvcid": "4420", 00:42:44.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.718 "hdgst": false, 00:42:44.718 "ddgst": false 00:42:44.718 }, 00:42:44.718 "method": "bdev_nvme_attach_controller" 00:42:44.718 }' 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:44.718 22:48:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.977 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:44.977 fio-3.35 00:42:44.977 Starting 1 thread 00:42:57.197 00:42:57.197 filename0: (groupid=0, jobs=1): err= 0: pid=634818: Mon Dec 16 22:48:45 2024 00:42:57.197 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10021msec) 00:42:57.197 slat (nsec): min=5909, max=45904, avg=6617.00, stdev=2158.85 00:42:57.197 clat (usec): min=40896, max=46123, avg=41385.09, stdev=573.67 00:42:57.197 lat (usec): min=40903, max=46169, avg=41391.71, stdev=573.97 00:42:57.197 clat percentiles (usec): 00:42:57.197 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:57.197 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.197 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:57.197 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:42:57.197 | 99.99th=[45876] 00:42:57.197 bw ( KiB/s): min= 383, max= 416, per=99.64%, avg=385.55, stdev= 7.17, samples=20 00:42:57.197 iops : min= 95, max= 104, avg=96.35, stdev= 1.81, samples=20 00:42:57.197 lat (msec) : 50=100.00% 00:42:57.197 cpu : usr=91.73%, sys=7.99%, ctx=66, majf=0, minf=0 00:42:57.197 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.197 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.197 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:57.197 00:42:57.197 Run status group 0 (all jobs): 
00:42:57.197 READ: bw=386KiB/s (396kB/s), 386KiB/s-386KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10021-10021msec 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 00:42:57.197 real 0m11.174s 00:42:57.197 user 0m16.318s 00:42:57.197 sys 0m1.112s 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 ************************************ 00:42:57.197 END TEST fio_dif_1_default 00:42:57.197 ************************************ 00:42:57.197 22:48:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:57.197 22:48:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:57.197 22:48:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 ************************************ 00:42:57.197 START TEST fio_dif_1_multi_subsystems 00:42:57.197 ************************************ 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 bdev_null0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 [2024-12-16 22:48:45.418374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 bdev_null1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:57.197 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:57.197 { 00:42:57.197 "params": { 00:42:57.197 "name": "Nvme$subsystem", 00:42:57.197 "trtype": "$TEST_TRANSPORT", 00:42:57.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:57.198 "adrfam": "ipv4", 00:42:57.198 "trsvcid": "$NVMF_PORT", 00:42:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:57.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:57.198 "hdgst": ${hdgst:-false}, 00:42:57.198 "ddgst": ${ddgst:-false} 00:42:57.198 }, 00:42:57.198 "method": "bdev_nvme_attach_controller" 00:42:57.198 } 00:42:57.198 EOF 00:42:57.198 )") 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:57.198 { 00:42:57.198 "params": { 00:42:57.198 "name": "Nvme$subsystem", 00:42:57.198 "trtype": "$TEST_TRANSPORT", 00:42:57.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:57.198 "adrfam": "ipv4", 00:42:57.198 "trsvcid": "$NVMF_PORT", 00:42:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:57.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:57.198 "hdgst": ${hdgst:-false}, 00:42:57.198 "ddgst": ${ddgst:-false} 00:42:57.198 }, 00:42:57.198 "method": "bdev_nvme_attach_controller" 00:42:57.198 } 00:42:57.198 EOF 00:42:57.198 )") 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:57.198 "params": { 00:42:57.198 "name": "Nvme0", 00:42:57.198 "trtype": "tcp", 00:42:57.198 "traddr": "10.0.0.2", 00:42:57.198 "adrfam": "ipv4", 00:42:57.198 "trsvcid": "4420", 00:42:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:57.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:57.198 "hdgst": false, 00:42:57.198 "ddgst": false 00:42:57.198 }, 00:42:57.198 "method": "bdev_nvme_attach_controller" 00:42:57.198 },{ 00:42:57.198 "params": { 00:42:57.198 "name": "Nvme1", 00:42:57.198 "trtype": "tcp", 00:42:57.198 "traddr": "10.0.0.2", 00:42:57.198 "adrfam": "ipv4", 00:42:57.198 "trsvcid": "4420", 00:42:57.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:57.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:57.198 "hdgst": false, 00:42:57.198 "ddgst": false 00:42:57.198 }, 00:42:57.198 "method": "bdev_nvme_attach_controller" 00:42:57.198 }' 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:57.198 22:48:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:57.198 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:57.198 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:57.198 fio-3.35 00:42:57.198 Starting 2 threads 00:43:07.176 00:43:07.176 filename0: (groupid=0, jobs=1): err= 0: pid=636717: Mon Dec 16 22:48:56 2024 00:43:07.176 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:43:07.176 slat (nsec): min=6003, max=30707, avg=7585.57, stdev=2427.25 00:43:07.176 clat (usec): min=40801, max=42001, avg=40988.88, stdev=112.91 00:43:07.176 lat (usec): min=40807, max=42012, avg=40996.47, stdev=113.32 00:43:07.176 clat percentiles (usec): 00:43:07.176 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:07.176 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:07.176 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:07.176 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:07.176 | 99.99th=[42206] 00:43:07.176 bw ( KiB/s): min= 384, max= 416, per=33.67%, avg=388.80, stdev=11.72, samples=20 00:43:07.176 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:07.176 lat (msec) : 50=100.00% 00:43:07.176 cpu : usr=96.96%, sys=2.79%, ctx=14, majf=0, minf=105 00:43:07.176 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.176 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.176 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:07.176 filename1: (groupid=0, jobs=1): err= 0: pid=636718: Mon Dec 16 22:48:56 2024 00:43:07.176 read: IOPS=190, BW=763KiB/s (782kB/s)(7664KiB/10040msec) 00:43:07.176 slat (nsec): min=5949, max=25484, avg=6987.81, stdev=1951.51 00:43:07.176 clat (usec): min=414, max=42643, avg=20940.01, stdev=20572.04 00:43:07.176 lat (usec): min=421, max=42649, avg=20947.00, stdev=20571.45 00:43:07.176 clat percentiles (usec): 00:43:07.176 | 1.00th=[ 453], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 482], 00:43:07.176 | 30.00th=[ 490], 40.00th=[ 506], 50.00th=[ 644], 60.00th=[41157], 00:43:07.176 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:43:07.176 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:07.176 | 99.99th=[42730] 00:43:07.176 bw ( KiB/s): min= 704, max= 832, per=66.31%, avg=764.80, stdev=32.67, samples=20 00:43:07.176 iops : min= 176, max= 208, avg=191.20, stdev= 8.17, samples=20 00:43:07.176 lat (usec) : 500=38.99%, 750=11.33% 00:43:07.176 lat (msec) : 50=49.69% 00:43:07.176 cpu : usr=96.99%, sys=2.76%, ctx=9, majf=0, minf=152 00:43:07.176 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.176 
issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.176 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:07.176 00:43:07.176 Run status group 0 (all jobs): 00:43:07.176 READ: bw=1152KiB/s (1180kB/s), 390KiB/s-763KiB/s (399kB/s-782kB/s), io=11.3MiB (11.8MB), run=10007-10040msec 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.176 00:43:07.176 real 0m11.446s 00:43:07.176 user 0m27.013s 00:43:07.176 sys 0m0.881s 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:07.176 ************************************ 00:43:07.176 END TEST fio_dif_1_multi_subsystems 00:43:07.176 ************************************ 00:43:07.176 22:48:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:07.176 22:48:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 
-le 1 ']' 00:43:07.176 22:48:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:07.176 22:48:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:07.435 ************************************ 00:43:07.435 START TEST fio_dif_rand_params 00:43:07.435 ************************************ 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.435 bdev_null0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:07.435 [2024-12-16 22:48:56.938039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.435 22:48:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:07.435 { 00:43:07.435 "params": { 00:43:07.435 "name": "Nvme$subsystem", 00:43:07.435 "trtype": "$TEST_TRANSPORT", 00:43:07.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:07.435 "adrfam": "ipv4", 00:43:07.435 "trsvcid": "$NVMF_PORT", 00:43:07.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.435 "hdgst": ${hdgst:-false}, 00:43:07.435 "ddgst": ${ddgst:-false} 00:43:07.435 }, 00:43:07.435 "method": "bdev_nvme_attach_controller" 00:43:07.435 } 00:43:07.435 EOF 00:43:07.435 )") 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
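Note: for fio_dif_rand_params the target side is rebuilt around a DIF type 3 null bdev (64 MiB, 512-byte blocks plus 16 bytes of metadata); the TCP transport itself was created once earlier with -o --dif-insert-or-strip, so each test case only re-runs the bdev and subsystem RPCs. The same sequence as explicit rpc.py calls, a sketch assuming SPDK_DIR is the tree root and the target listens on the default RPC socket:

  # Sketch of the rpc_cmd calls traced above.
  RPC="$SPDK_DIR/scripts/rpc.py"    # placeholder path
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MiB null bdev, 512-byte blocks with 16-byte metadata, DIF type 3
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420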
00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:07.435 "params": { 00:43:07.435 "name": "Nvme0", 00:43:07.435 "trtype": "tcp", 00:43:07.435 "traddr": "10.0.0.2", 00:43:07.435 "adrfam": "ipv4", 00:43:07.435 "trsvcid": "4420", 00:43:07.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.435 "hdgst": false, 00:43:07.435 "ddgst": false 00:43:07.435 }, 00:43:07.435 "method": "bdev_nvme_attach_controller" 00:43:07.435 }' 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:07.435 22:48:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:07.435 22:48:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:07.435 22:48:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:07.435 22:48:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:07.435 22:48:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:07.693 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:07.693 ... 
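Note: the "..." in the fio banner below stands for the two cloned job entries produced by numjobs=3. Reconstructed from the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 settings traced earlier (not the literal generated file), and assuming the attached controller exposes its first namespace under the usual SPDK name Nvme0n1, the job roughly looks like this:

  # Reconstruction of the generated fio job; values come from the
  # fio_dif_rand_params settings above, filename is an assumption.
  cat > /tmp/dif_rand.fio <<'EOF'
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1
  numjobs=3
  EOF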
00:43:07.693 fio-3.35 00:43:07.693 Starting 3 threads 00:43:14.253 00:43:14.253 filename0: (groupid=0, jobs=1): err= 0: pid=638520: Mon Dec 16 22:49:02 2024 00:43:14.253 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(195MiB/5002msec) 00:43:14.253 slat (nsec): min=6405, max=75920, avg=12580.10, stdev=4904.26 00:43:14.253 clat (usec): min=3631, max=50804, avg=9622.22, stdev=4897.93 00:43:14.253 lat (usec): min=3638, max=50811, avg=9634.80, stdev=4898.04 00:43:14.253 clat percentiles (usec): 00:43:14.253 | 1.00th=[ 3851], 5.00th=[ 5276], 10.00th=[ 6128], 20.00th=[ 6915], 00:43:14.253 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:43:14.253 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11731], 95.00th=[12387], 00:43:14.253 | 99.00th=[45351], 99.50th=[46924], 99.90th=[50594], 99.95th=[50594], 00:43:14.253 | 99.99th=[50594] 00:43:14.253 bw ( KiB/s): min=33280, max=56064, per=34.13%, avg=39808.00, stdev=6505.79, samples=10 00:43:14.253 iops : min= 260, max= 438, avg=311.00, stdev=50.83, samples=10 00:43:14.253 lat (msec) : 4=1.73%, 10=56.84%, 20=40.08%, 50=0.96%, 100=0.39% 00:43:14.253 cpu : usr=95.86%, sys=3.80%, ctx=14, majf=0, minf=100 00:43:14.253 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:14.253 filename0: (groupid=0, jobs=1): err= 0: pid=638521: Mon Dec 16 22:49:02 2024 00:43:14.253 read: IOPS=316, BW=39.6MiB/s (41.5MB/s)(200MiB/5043msec) 00:43:14.253 slat (nsec): min=6243, max=70600, avg=12453.33, stdev=4666.73 00:43:14.253 clat (usec): min=3007, max=49371, avg=9426.11, stdev=4934.81 00:43:14.253 lat (usec): min=3014, max=49382, avg=9438.57, stdev=4934.85 00:43:14.253 clat percentiles (usec): 00:43:14.253 | 1.00th=[ 5145], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6849], 00:43:14.253 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:43:14.253 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[11863], 00:43:14.253 | 99.00th=[45876], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:43:14.253 | 99.99th=[49546] 00:43:14.253 bw ( KiB/s): min=24320, max=57088, per=35.03%, avg=40857.60, stdev=8037.84, samples=10 00:43:14.253 iops : min= 190, max= 446, avg=319.20, stdev=62.80, samples=10 00:43:14.253 lat (msec) : 4=0.19%, 10=69.21%, 20=29.16%, 50=1.44% 00:43:14.253 cpu : usr=95.72%, sys=3.95%, ctx=18, majf=0, minf=44 00:43:14.253 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 issued rwts: total=1598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:14.253 filename0: (groupid=0, jobs=1): err= 0: pid=638522: Mon Dec 16 22:49:02 2024 00:43:14.253 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(180MiB/5006msec) 00:43:14.253 slat (nsec): min=6262, max=80230, avg=13065.13, stdev=5620.19 00:43:14.253 clat (usec): min=3715, max=51552, avg=10410.96, stdev=8564.52 00:43:14.253 lat (usec): min=3724, max=51565, avg=10424.03, stdev=8564.38 00:43:14.253 clat percentiles (usec): 00:43:14.253 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 7111], 
20.00th=[ 7635], 00:43:14.253 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8848], 00:43:14.253 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[12518], 00:43:14.253 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:43:14.253 | 99.99th=[51643] 00:43:14.253 bw ( KiB/s): min=14080, max=47872, per=31.56%, avg=36812.80, stdev=12048.51, samples=10 00:43:14.253 iops : min= 110, max= 374, avg=287.60, stdev=94.13, samples=10 00:43:14.253 lat (msec) : 4=0.83%, 10=82.99%, 20=11.39%, 50=4.44%, 100=0.35% 00:43:14.253 cpu : usr=95.96%, sys=3.70%, ctx=15, majf=0, minf=56 00:43:14.253 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.253 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.253 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:14.253 00:43:14.253 Run status group 0 (all jobs): 00:43:14.253 READ: bw=114MiB/s (119MB/s), 36.0MiB/s-39.6MiB/s (37.7MB/s-41.5MB/s), io=574MiB (602MB), run=5002-5043msec 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 bdev_null0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 [2024-12-16 22:49:03.089183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 bdev_null1 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.253 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.253 22:49:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.254 bdev_null2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 
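The create_subsystems 0 1 2 pass traced above provisions one 64 MiB null bdev per subsystem, formatted as 512-byte blocks with 16 bytes of per-block metadata and protection information enabled (NULL_DIF=2 selects DIF type 2), then exports each bdev through its own NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. A sketch of the equivalent standalone rpc.py calls for sub_id 0; the rpc.py path is an assumption, the test itself drives the same RPCs through rpc_cmd:

    rpc=scripts/rpc.py   # path within an SPDK checkout; assumption
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420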
00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.254 { 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme$subsystem", 00:43:14.254 "trtype": "$TEST_TRANSPORT", 00:43:14.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "$NVMF_PORT", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.254 "hdgst": ${hdgst:-false}, 00:43:14.254 "ddgst": ${ddgst:-false} 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 } 00:43:14.254 EOF 00:43:14.254 )") 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.254 { 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme$subsystem", 00:43:14.254 "trtype": "$TEST_TRANSPORT", 00:43:14.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "$NVMF_PORT", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.254 "hdgst": ${hdgst:-false}, 00:43:14.254 "ddgst": ${ddgst:-false} 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 } 00:43:14.254 EOF 00:43:14.254 )") 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
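Two generators run interleaved in this trace: gen_fio_conf (target/dif.sh) writes the fio job file, bumping its file counter once per configured file, while gen_nvmf_target_json (nvmf/common.sh) accumulates one bdev_nvme_attach_controller fragment per subsystem. fio_bdev stitches the two together with process substitution, which is why the command line above shows bare /dev/fd paths. Schematically, with the argument order as traced (the exact /dev/fd numbers depend on the shell):

    # Both configs are piped in without touching disk: the SPDK JSON config
    # arrives on one /dev/fd, the fio job file on the other.
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1 2) \
        <(gen_fio_conf)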
00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.254 { 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme$subsystem", 00:43:14.254 "trtype": "$TEST_TRANSPORT", 00:43:14.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "$NVMF_PORT", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.254 "hdgst": ${hdgst:-false}, 00:43:14.254 "ddgst": ${ddgst:-false} 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 } 00:43:14.254 EOF 00:43:14.254 )") 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme0", 00:43:14.254 "trtype": "tcp", 00:43:14.254 "traddr": "10.0.0.2", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "4420", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:14.254 "hdgst": false, 00:43:14.254 "ddgst": false 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 },{ 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme1", 00:43:14.254 "trtype": "tcp", 00:43:14.254 "traddr": "10.0.0.2", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "4420", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:14.254 "hdgst": false, 00:43:14.254 "ddgst": false 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 },{ 00:43:14.254 "params": { 00:43:14.254 "name": "Nvme2", 00:43:14.254 "trtype": "tcp", 00:43:14.254 "traddr": "10.0.0.2", 00:43:14.254 "adrfam": "ipv4", 00:43:14.254 "trsvcid": "4420", 00:43:14.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:14.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:14.254 "hdgst": false, 00:43:14.254 "ddgst": false 00:43:14.254 }, 00:43:14.254 "method": "bdev_nvme_attach_controller" 00:43:14.254 }' 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:14.254 22:49:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:14.254 ... 00:43:14.254 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:14.254 ... 00:43:14.254 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:14.254 ... 00:43:14.254 fio-3.35 00:43:14.254 Starting 24 threads 00:43:26.452 00:43:26.452 filename0: (groupid=0, jobs=1): err= 0: pid=639757: Mon Dec 16 22:49:14 2024 00:43:26.452 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.4MiB/10022msec) 00:43:26.452 slat (usec): min=7, max=116, avg=50.39, stdev=19.24 00:43:26.452 clat (msec): min=10, max=377, avg=30.31, stdev=29.00 00:43:26.452 lat (msec): min=10, max=377, avg=30.36, stdev=29.00 00:43:26.452 clat percentiles (msec): 00:43:26.452 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.452 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.452 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.452 | 99.00th=[ 32], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380], 00:43:26.452 | 99.99th=[ 380] 00:43:26.452 bw ( KiB/s): min= 256, max= 2560, per=4.19%, avg=2078.70, stdev=558.47, samples=20 00:43:26.452 iops : min= 64, max= 640, avg=519.50, stdev=139.60, samples=20 00:43:26.452 lat (msec) : 20=1.23%, 50=97.85%, 500=0.92% 00:43:26.452 cpu : usr=98.91%, sys=0.68%, ctx=19, majf=0, minf=61 00:43:26.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.452 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.452 filename0: (groupid=0, jobs=1): err= 0: pid=639758: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10004msec) 00:43:26.453 slat (nsec): min=6948, max=91104, avg=34076.30, stdev=20370.19 00:43:26.453 clat (msec): min=4, max=555, avg=30.44, stdev=36.14 00:43:26.453 lat (msec): min=4, max=555, avg=30.47, stdev=36.14 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 32], 99.50th=[ 409], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.453 | 99.99th=[ 558] 00:43:26.453 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=2066.95, stdev=591.26, samples=19 00:43:26.453 iops : min= 32, max= 640, avg=516.58, stdev=147.74, samples=19 00:43:26.453 lat (msec) : 10=0.19%, 20=1.35%, 50=97.85%, 500=0.31%, 750=0.31% 00:43:26.453 cpu : usr=98.86%, sys=0.75%, ctx=13, majf=0, minf=28 00:43:26.453 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639759: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10004msec) 00:43:26.453 slat (nsec): min=8147, max=98369, avg=33135.82, stdev=20528.71 00:43:26.453 clat (msec): min=9, max=556, avg=30.45, stdev=36.17 00:43:26.453 lat (msec): min=9, max=556, avg=30.48, stdev=36.17 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 32], 99.50th=[ 409], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.453 | 99.99th=[ 558] 00:43:26.453 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=2066.95, stdev=591.26, samples=19 00:43:26.453 iops : min= 32, max= 640, avg=516.58, stdev=147.74, samples=19 00:43:26.453 lat (msec) : 10=0.21%, 20=1.33%, 50=97.85%, 500=0.31%, 750=0.31% 00:43:26.453 cpu : usr=99.04%, sys=0.57%, ctx=13, majf=0, minf=60 00:43:26.453 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639760: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=517, BW=2069KiB/s (2119kB/s)(20.2MiB/10001msec) 00:43:26.453 slat (usec): min=6, max=109, avg=47.91, stdev=20.41 00:43:26.453 clat (msec): min=14, max=550, avg=30.46, stdev=32.92 00:43:26.453 lat (msec): min=14, max=550, avg=30.51, stdev=32.92 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 51], 99.50th=[ 380], 99.90th=[ 550], 99.95th=[ 550], 00:43:26.453 | 99.99th=[ 550] 00:43:26.453 bw ( KiB/s): min= 176, max= 2432, per=4.13%, avg=2050.00, stdev=593.34, samples=19 00:43:26.453 iops : min= 44, max= 608, avg=512.42, stdev=148.31, samples=19 00:43:26.453 lat (msec) : 20=0.70%, 50=98.36%, 100=0.21%, 500=0.54%, 750=0.19% 00:43:26.453 cpu : usr=98.85%, sys=0.74%, ctx=15, majf=0, minf=47 00:43:26.453 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639761: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10017msec) 00:43:26.453 slat (nsec): min=7643, max=92755, avg=32072.78, stdev=19405.48 00:43:26.453 clat (msec): min=16, max=557, avg=30.61, stdev=36.36 00:43:26.453 lat (msec): min=16, max=557, avg=30.64, 
stdev=36.36 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 32], 99.50th=[ 414], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.453 | 99.99th=[ 558] 00:43:26.453 bw ( KiB/s): min= 128, max= 2436, per=4.17%, avg=2066.45, stdev=590.54, samples=20 00:43:26.453 iops : min= 32, max= 609, avg=516.50, stdev=147.58, samples=20 00:43:26.453 lat (msec) : 20=0.75%, 50=98.63%, 500=0.31%, 750=0.31% 00:43:26.453 cpu : usr=98.95%, sys=0.65%, ctx=27, majf=0, minf=60 00:43:26.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639762: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10003msec) 00:43:26.453 slat (usec): min=6, max=109, avg=36.13, stdev=21.97 00:43:26.453 clat (msec): min=8, max=574, avg=30.46, stdev=32.02 00:43:26.453 lat (msec): min=8, max=574, avg=30.49, stdev=32.02 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 68], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:43:26.453 | 99.99th=[ 575] 00:43:26.453 bw ( KiB/s): min= 128, max= 2560, per=4.13%, avg=2050.00, stdev=594.71, samples=19 00:43:26.453 iops : min= 32, max= 640, avg=512.42, stdev=148.65, samples=19 00:43:26.453 lat (msec) : 10=0.31%, 20=1.91%, 50=96.55%, 100=0.31%, 250=0.31% 00:43:26.453 lat (msec) : 500=0.58%, 750=0.04% 00:43:26.453 cpu : usr=99.06%, sys=0.54%, ctx=13, majf=0, minf=43 00:43:26.453 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639763: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10011msec) 00:43:26.453 slat (usec): min=4, max=123, avg=43.91, stdev=22.96 00:43:26.453 clat (msec): min=12, max=432, avg=30.47, stdev=31.68 00:43:26.453 lat (msec): min=12, max=432, avg=30.51, stdev=31.68 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 35], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 435], 00:43:26.453 | 99.99th=[ 435] 00:43:26.453 bw ( KiB/s): min= 128, max= 2427, per=4.14%, avg=2053.47, stdev=593.05, samples=19 00:43:26.453 iops : min= 32, max= 606, avg=513.21, stdev=148.18, samples=19 00:43:26.453 lat (msec) : 20=0.60%, 50=98.48%, 250=0.31%, 500=0.62% 
00:43:26.453 cpu : usr=99.05%, sys=0.55%, ctx=13, majf=0, minf=37 00:43:26.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.453 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.453 filename0: (groupid=0, jobs=1): err= 0: pid=639764: Mon Dec 16 22:49:14 2024 00:43:26.453 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.3MiB/10001msec) 00:43:26.453 slat (usec): min=3, max=122, avg=48.73, stdev=21.26 00:43:26.453 clat (msec): min=14, max=555, avg=30.34, stdev=33.03 00:43:26.453 lat (msec): min=14, max=555, avg=30.39, stdev=33.03 00:43:26.453 clat percentiles (msec): 00:43:26.453 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.453 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.453 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.453 | 99.00th=[ 51], 99.50th=[ 380], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.453 | 99.99th=[ 558] 00:43:26.454 bw ( KiB/s): min= 176, max= 2432, per=4.15%, avg=2059.21, stdev=592.51, samples=19 00:43:26.454 iops : min= 44, max= 608, avg=514.68, stdev=148.10, samples=19 00:43:26.454 lat (msec) : 20=1.81%, 50=96.96%, 100=0.50%, 500=0.54%, 750=0.19% 00:43:26.454 cpu : usr=98.93%, sys=0.67%, ctx=15, majf=0, minf=36 00:43:26.454 IO depths : 1=5.6%, 2=11.6%, 4=24.4%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639765: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10013msec) 00:43:26.454 slat (usec): min=4, max=111, avg=44.63, stdev=22.13 00:43:26.454 clat (msec): min=12, max=432, avg=30.47, stdev=31.72 00:43:26.454 lat (msec): min=13, max=432, avg=30.52, stdev=31.72 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.454 | 99.00th=[ 32], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 435], 00:43:26.454 | 99.99th=[ 435] 00:43:26.454 bw ( KiB/s): min= 128, max= 2432, per=4.14%, avg=2053.95, stdev=599.30, samples=19 00:43:26.454 iops : min= 32, max= 608, avg=513.37, stdev=149.76, samples=19 00:43:26.454 lat (msec) : 20=0.58%, 50=98.50%, 250=0.31%, 500=0.62% 00:43:26.454 cpu : usr=98.88%, sys=0.73%, ctx=13, majf=0, minf=32 00:43:26.454 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639766: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=517, BW=2072KiB/s (2122kB/s)(20.2MiB/10004msec) 00:43:26.454 slat (usec): min=6, 
max=110, avg=49.05, stdev=21.24 00:43:26.454 clat (msec): min=4, max=380, avg=30.44, stdev=29.11 00:43:26.454 lat (msec): min=4, max=380, avg=30.48, stdev=29.11 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.454 | 99.00th=[ 51], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380], 00:43:26.454 | 99.99th=[ 380] 00:43:26.454 bw ( KiB/s): min= 256, max= 2416, per=4.13%, avg=2047.47, stdev=581.50, samples=19 00:43:26.454 iops : min= 64, max= 604, avg=511.79, stdev=145.35, samples=19 00:43:26.454 lat (msec) : 10=0.27%, 20=0.60%, 50=98.21%, 500=0.93% 00:43:26.454 cpu : usr=99.02%, sys=0.59%, ctx=13, majf=0, minf=41 00:43:26.454 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639767: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.4MiB/10022msec) 00:43:26.454 slat (nsec): min=6718, max=91718, avg=16023.72, stdev=12205.10 00:43:26.454 clat (msec): min=10, max=510, avg=30.61, stdev=29.55 00:43:26.454 lat (msec): min=10, max=510, avg=30.62, stdev=29.55 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.454 | 99.00th=[ 35], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380], 00:43:26.454 | 99.99th=[ 510] 00:43:26.454 bw ( KiB/s): min= 240, max= 2565, per=4.19%, avg=2078.95, stdev=560.64, samples=20 00:43:26.454 iops : min= 60, max= 641, avg=519.55, stdev=140.13, samples=20 00:43:26.454 lat (msec) : 20=1.23%, 50=97.85%, 250=0.04%, 500=0.84%, 750=0.04% 00:43:26.454 cpu : usr=98.48%, sys=1.13%, ctx=22, majf=0, minf=64 00:43:26.454 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639768: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=518, BW=2074KiB/s (2124kB/s)(20.3MiB/10010msec) 00:43:26.454 slat (usec): min=6, max=101, avg=22.71, stdev=14.56 00:43:26.454 clat (msec): min=16, max=509, avg=30.67, stdev=31.59 00:43:26.454 lat (msec): min=16, max=509, avg=30.70, stdev=31.59 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.454 | 99.00th=[ 43], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 435], 00:43:26.454 | 99.99th=[ 510] 00:43:26.454 bw ( KiB/s): min= 128, max= 2432, per=4.15%, avg=2056.53, stdev=583.80, samples=19 00:43:26.454 iops : min= 32, max= 
608, avg=514.05, stdev=145.91, samples=19 00:43:26.454 lat (msec) : 20=0.89%, 50=98.19%, 250=0.31%, 500=0.58%, 750=0.04% 00:43:26.454 cpu : usr=98.71%, sys=0.92%, ctx=16, majf=0, minf=48 00:43:26.454 IO depths : 1=5.1%, 2=11.0%, 4=24.5%, 8=52.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639769: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10004msec) 00:43:26.454 slat (nsec): min=7402, max=93776, avg=27750.42, stdev=17350.72 00:43:26.454 clat (msec): min=9, max=656, avg=30.55, stdev=36.44 00:43:26.454 lat (msec): min=9, max=656, avg=30.58, stdev=36.44 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.454 | 99.00th=[ 32], 99.50th=[ 409], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.454 | 99.99th=[ 659] 00:43:26.454 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=2066.95, stdev=591.26, samples=19 00:43:26.454 iops : min= 32, max= 640, avg=516.58, stdev=147.74, samples=19 00:43:26.454 lat (msec) : 10=0.10%, 20=1.44%, 50=97.85%, 500=0.31%, 750=0.31% 00:43:26.454 cpu : usr=98.46%, sys=0.94%, ctx=54, majf=0, minf=63 00:43:26.454 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639770: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=518, BW=2073KiB/s (2122kB/s)(20.2MiB/10005msec) 00:43:26.454 slat (nsec): min=6140, max=95902, avg=43563.24, stdev=15604.52 00:43:26.454 clat (msec): min=4, max=534, avg=30.52, stdev=29.84 00:43:26.454 lat (msec): min=4, max=534, avg=30.56, stdev=29.84 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.454 | 99.00th=[ 50], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 384], 00:43:26.454 | 99.99th=[ 535] 00:43:26.454 bw ( KiB/s): min= 240, max= 2432, per=4.13%, avg=2047.47, stdev=584.22, samples=19 00:43:26.454 iops : min= 60, max= 608, avg=511.79, stdev=146.03, samples=19 00:43:26.454 lat (msec) : 10=0.31%, 20=0.56%, 50=98.17%, 100=0.08%, 500=0.85% 00:43:26.454 lat (msec) : 750=0.04% 00:43:26.454 cpu : usr=98.72%, sys=0.90%, ctx=16, majf=0, minf=39 00:43:26.454 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 
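For these 4 KiB random-read jobs the per-job bandwidth and IOPS lines are two views of the same measurement, bw ≈ iops × bs; e.g. for pid=639757 above, 519.5 IOPS × 4 KiB ≈ 2078 KiB/s, matching the reported bw avg of 2078.70. A quick shell check using the header values:

    # BW (KiB/s) ≈ IOPS * block size (KiB); bash arithmetic is integer-only,
    # so use the rounded header figures for pid=639757: IOPS=520, bs=4 KiB.
    iops=520 bs_kib=4
    echo $((iops * bs_kib))   # 2080, vs. BW=2082KiB/s reported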
filename1: (groupid=0, jobs=1): err= 0: pid=639771: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=514, BW=2058KiB/s (2108kB/s)(20.2MiB/10043msec) 00:43:26.454 slat (nsec): min=4466, max=83956, avg=32246.98, stdev=17888.77 00:43:26.454 clat (msec): min=10, max=382, avg=30.71, stdev=29.36 00:43:26.454 lat (msec): min=10, max=382, avg=30.74, stdev=29.36 00:43:26.454 clat percentiles (msec): 00:43:26.454 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.454 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.454 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.454 | 99.00th=[ 52], 99.50th=[ 330], 99.90th=[ 384], 99.95th=[ 384], 00:43:26.454 | 99.99th=[ 384] 00:43:26.454 bw ( KiB/s): min= 256, max= 2432, per=4.13%, avg=2046.37, stdev=584.30, samples=19 00:43:26.454 iops : min= 64, max= 608, avg=511.47, stdev=146.03, samples=19 00:43:26.454 lat (msec) : 20=0.70%, 50=98.07%, 100=0.31%, 500=0.93% 00:43:26.454 cpu : usr=98.63%, sys=0.99%, ctx=17, majf=0, minf=47 00:43:26.454 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:43:26.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.454 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.454 filename1: (groupid=0, jobs=1): err= 0: pid=639772: Mon Dec 16 22:49:14 2024 00:43:26.454 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.2MiB/10001msec) 00:43:26.454 slat (nsec): min=5085, max=86768, avg=37956.91, stdev=16034.83 00:43:26.455 clat (msec): min=19, max=431, avg=30.64, stdev=31.69 00:43:26.455 lat (msec): min=19, max=431, avg=30.68, stdev=31.69 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 32], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:43:26.455 | 99.99th=[ 430] 00:43:26.455 bw ( KiB/s): min= 128, max= 2432, per=4.14%, avg=2053.74, stdev=599.21, samples=19 00:43:26.455 iops : min= 32, max= 608, avg=513.32, stdev=149.73, samples=19 00:43:26.455 lat (msec) : 20=0.23%, 50=98.84%, 250=0.31%, 500=0.62% 00:43:26.455 cpu : usr=98.00%, sys=1.38%, ctx=58, majf=0, minf=36 00:43:26.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639773: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=519, BW=2080KiB/s (2130kB/s)(20.3MiB/10001msec) 00:43:26.455 slat (nsec): min=7146, max=57775, avg=14451.25, stdev=6337.07 00:43:26.455 clat (msec): min=7, max=555, avg=30.65, stdev=36.14 00:43:26.455 lat (msec): min=7, max=555, avg=30.66, stdev=36.14 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 32], 99.50th=[ 
409], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.455 | 99.99th=[ 558] 00:43:26.455 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=2067.26, stdev=597.39, samples=19 00:43:26.455 iops : min= 32, max= 640, avg=516.68, stdev=149.32, samples=19 00:43:26.455 lat (msec) : 10=0.04%, 20=1.50%, 50=97.85%, 500=0.31%, 750=0.31% 00:43:26.455 cpu : usr=98.37%, sys=1.05%, ctx=111, majf=0, minf=36 00:43:26.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639774: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.2MiB/10001msec) 00:43:26.455 slat (nsec): min=4791, max=89760, avg=45540.95, stdev=14458.91 00:43:26.455 clat (msec): min=12, max=524, avg=30.57, stdev=29.87 00:43:26.455 lat (msec): min=12, max=524, avg=30.61, stdev=29.87 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.455 | 99.00th=[ 52], 99.50th=[ 330], 99.90th=[ 384], 99.95th=[ 401], 00:43:26.455 | 99.99th=[ 523] 00:43:26.455 bw ( KiB/s): min= 240, max= 2432, per=4.13%, avg=2047.21, stdev=587.27, samples=19 00:43:26.455 iops : min= 60, max= 608, avg=511.68, stdev=146.78, samples=19 00:43:26.455 lat (msec) : 20=0.56%, 50=98.20%, 100=0.35%, 500=0.85%, 750=0.04% 00:43:26.455 cpu : usr=98.38%, sys=1.01%, ctx=95, majf=0, minf=31 00:43:26.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639775: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=519, BW=2079KiB/s (2129kB/s)(20.3MiB/10006msec) 00:43:26.455 slat (usec): min=7, max=101, avg=27.24, stdev=13.53 00:43:26.455 clat (msec): min=9, max=556, avg=30.55, stdev=36.16 00:43:26.455 lat (msec): min=9, max=556, avg=30.58, stdev=36.16 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 32], 99.50th=[ 409], 99.90th=[ 558], 99.95th=[ 558], 00:43:26.455 | 99.99th=[ 558] 00:43:26.455 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=2066.95, stdev=591.26, samples=19 00:43:26.455 iops : min= 32, max= 640, avg=516.58, stdev=147.74, samples=19 00:43:26.455 lat (msec) : 10=0.06%, 20=1.48%, 50=97.85%, 500=0.31%, 750=0.31% 00:43:26.455 cpu : usr=98.24%, sys=1.15%, ctx=57, majf=0, minf=48 00:43:26.455 IO depths : 1=5.4%, 2=11.2%, 4=23.8%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:43:26.455 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639776: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.2MiB/10001msec) 00:43:26.455 slat (nsec): min=4108, max=92707, avg=40743.00, stdev=16592.70 00:43:26.455 clat (msec): min=13, max=381, avg=30.64, stdev=29.13 00:43:26.455 lat (msec): min=13, max=381, avg=30.68, stdev=29.13 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 53], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380], 00:43:26.455 | 99.99th=[ 380] 00:43:26.455 bw ( KiB/s): min= 256, max= 2432, per=4.13%, avg=2047.00, stdev=584.87, samples=19 00:43:26.455 iops : min= 64, max= 608, avg=511.63, stdev=146.18, samples=19 00:43:26.455 lat (msec) : 20=0.58%, 50=98.18%, 100=0.31%, 500=0.93% 00:43:26.455 cpu : usr=97.96%, sys=1.30%, ctx=116, majf=0, minf=46 00:43:26.455 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639777: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.3MiB/10018msec) 00:43:26.455 slat (nsec): min=4292, max=90477, avg=26317.73, stdev=17322.47 00:43:26.455 clat (msec): min=11, max=379, avg=30.62, stdev=29.00 00:43:26.455 lat (msec): min=11, max=379, avg=30.65, stdev=29.00 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 32], 99.50th=[ 330], 99.90th=[ 380], 99.95th=[ 380], 00:43:26.455 | 99.99th=[ 380] 00:43:26.455 bw ( KiB/s): min= 256, max= 2432, per=4.18%, avg=2072.65, stdev=565.54, samples=20 00:43:26.455 iops : min= 64, max= 608, avg=518.05, stdev=141.33, samples=20 00:43:26.455 lat (msec) : 20=0.77%, 50=98.31%, 500=0.92% 00:43:26.455 cpu : usr=98.46%, sys=1.06%, ctx=45, majf=0, minf=57 00:43:26.455 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639778: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10011msec) 00:43:26.455 slat (nsec): min=5531, max=89524, avg=37207.59, stdev=16651.50 00:43:26.455 clat (msec): min=13, max=573, avg=30.60, stdev=31.91 00:43:26.455 lat (msec): min=13, max=573, avg=30.64, stdev=31.91 00:43:26.455 clat percentiles (msec): 00:43:26.455 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.455 | 30.00th=[ 27], 40.00th=[ 
27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.455 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.455 | 99.00th=[ 37], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:43:26.455 | 99.99th=[ 575] 00:43:26.455 bw ( KiB/s): min= 128, max= 2427, per=4.14%, avg=2053.47, stdev=593.05, samples=19 00:43:26.455 iops : min= 32, max= 606, avg=513.21, stdev=148.18, samples=19 00:43:26.455 lat (msec) : 20=0.60%, 50=98.48%, 250=0.31%, 500=0.58%, 750=0.04% 00:43:26.455 cpu : usr=98.79%, sys=0.83%, ctx=20, majf=0, minf=32 00:43:26.455 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:26.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.455 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.455 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.455 filename2: (groupid=0, jobs=1): err= 0: pid=639779: Mon Dec 16 22:49:14 2024 00:43:26.455 read: IOPS=515, BW=2062KiB/s (2112kB/s)(20.1MiB/10005msec) 00:43:26.455 slat (usec): min=4, max=105, avg=45.15, stdev=19.70 00:43:26.455 clat (msec): min=14, max=598, avg=30.60, stdev=37.18 00:43:26.455 lat (msec): min=14, max=598, avg=30.65, stdev=37.18 00:43:26.455 clat percentiles (msec): 00:43:26.456 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.456 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 28], 00:43:26.456 | 70.00th=[ 29], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:43:26.456 | 99.00th=[ 34], 99.50th=[ 384], 99.90th=[ 592], 99.95th=[ 592], 00:43:26.456 | 99.99th=[ 600] 00:43:26.456 bw ( KiB/s): min= 128, max= 2432, per=4.12%, avg=2043.00, stdev=607.71, samples=19 00:43:26.456 iops : min= 32, max= 608, avg=510.63, stdev=151.89, samples=19 00:43:26.456 lat (msec) : 20=0.79%, 50=98.27%, 100=0.31%, 500=0.31%, 750=0.31% 00:43:26.456 cpu : usr=98.68%, sys=0.73%, ctx=124, majf=0, minf=53 00:43:26.456 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:26.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.456 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.456 issued rwts: total=5158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.456 filename2: (groupid=0, jobs=1): err= 0: pid=639780: Mon Dec 16 22:49:14 2024 00:43:26.456 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10006msec) 00:43:26.456 slat (usec): min=4, max=109, avg=35.72, stdev=19.34 00:43:26.456 clat (msec): min=13, max=574, avg=30.57, stdev=31.97 00:43:26.456 lat (msec): min=13, max=574, avg=30.60, stdev=31.96 00:43:26.456 clat percentiles (msec): 00:43:26.456 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 27], 00:43:26.456 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 28], 00:43:26.456 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:26.456 | 99.00th=[ 43], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:43:26.456 | 99.99th=[ 575] 00:43:26.456 bw ( KiB/s): min= 128, max= 2480, per=4.16%, avg=2063.26, stdev=599.56, samples=19 00:43:26.456 iops : min= 32, max= 620, avg=515.74, stdev=149.85, samples=19 00:43:26.456 lat (msec) : 20=2.04%, 50=97.03%, 250=0.31%, 500=0.58%, 750=0.04% 00:43:26.456 cpu : usr=97.71%, sys=1.44%, ctx=227, majf=0, minf=46 00:43:26.456 IO depths : 1=5.4%, 2=11.3%, 4=24.0%, 8=52.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:43:26.456 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.456 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:26.456 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:26.456 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:26.456 00:43:26.456 Run status group 0 (all jobs): 00:43:26.456 READ: bw=48.4MiB/s (50.8MB/s), 2058KiB/s-2082KiB/s (2108kB/s-2132kB/s), io=486MiB (510MB), run=10001-10043msec 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 bdev_null0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 [2024-12-16 22:49:14.773733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 bdev_null1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.456 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:26.457 { 00:43:26.457 "params": { 00:43:26.457 "name": "Nvme$subsystem", 00:43:26.457 "trtype": "$TEST_TRANSPORT", 00:43:26.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:26.457 "adrfam": "ipv4", 00:43:26.457 
"trsvcid": "$NVMF_PORT", 00:43:26.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:26.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:26.457 "hdgst": ${hdgst:-false}, 00:43:26.457 "ddgst": ${ddgst:-false} 00:43:26.457 }, 00:43:26.457 "method": "bdev_nvme_attach_controller" 00:43:26.457 } 00:43:26.457 EOF 00:43:26.457 )") 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:26.457 { 00:43:26.457 "params": { 00:43:26.457 "name": "Nvme$subsystem", 00:43:26.457 "trtype": "$TEST_TRANSPORT", 00:43:26.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:26.457 "adrfam": "ipv4", 00:43:26.457 "trsvcid": "$NVMF_PORT", 00:43:26.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:26.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:26.457 "hdgst": ${hdgst:-false}, 00:43:26.457 "ddgst": ${ddgst:-false} 00:43:26.457 }, 00:43:26.457 "method": "bdev_nvme_attach_controller" 00:43:26.457 } 00:43:26.457 EOF 00:43:26.457 )") 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:26.457 "params": { 00:43:26.457 "name": "Nvme0", 00:43:26.457 "trtype": "tcp", 00:43:26.457 "traddr": "10.0.0.2", 00:43:26.457 "adrfam": "ipv4", 00:43:26.457 "trsvcid": "4420", 00:43:26.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:26.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:26.457 "hdgst": false, 00:43:26.457 "ddgst": false 00:43:26.457 }, 00:43:26.457 "method": "bdev_nvme_attach_controller" 00:43:26.457 },{ 00:43:26.457 "params": { 00:43:26.457 "name": "Nvme1", 00:43:26.457 "trtype": "tcp", 00:43:26.457 "traddr": "10.0.0.2", 00:43:26.457 "adrfam": "ipv4", 00:43:26.457 "trsvcid": "4420", 00:43:26.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:26.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:26.457 "hdgst": false, 00:43:26.457 "ddgst": false 00:43:26.457 }, 00:43:26.457 "method": "bdev_nvme_attach_controller" 00:43:26.457 }' 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:26.457 22:49:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:26.457 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:26.457 ... 00:43:26.457 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:26.457 ... 
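At this point gen_nvmf_target_json has expanded one bdev_nvme_attach_controller template per subsystem, joined the entries with a comma (the IFS=, plus printf above), and handed the result to fio on a file descriptor. A condensed, runnable sketch of that expansion, using the values visible in the resolved JSON above (any wrapper boilerplate outside this excerpt is omitted):

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1; do
  config+=("$(
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join the per-subsystem entries with commas, as the IFS=,/printf step above does.
IFS=,
printf '%s\n' "${config[*]}"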
00:43:26.457 fio-3.35 00:43:26.457 Starting 4 threads 00:43:31.721 00:43:31.721 filename0: (groupid=0, jobs=1): err= 0: pid=641672: Mon Dec 16 22:49:21 2024 00:43:31.721 read: IOPS=2582, BW=20.2MiB/s (21.2MB/s)(101MiB/5002msec) 00:43:31.721 slat (nsec): min=6177, max=47196, avg=8776.82, stdev=3158.82 00:43:31.721 clat (usec): min=747, max=5483, avg=3071.60, stdev=427.67 00:43:31.721 lat (usec): min=758, max=5489, avg=3080.38, stdev=427.42 00:43:31.721 clat percentiles (usec): 00:43:31.721 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2868], 00:43:31.721 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:43:31.721 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3785], 00:43:31.721 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5276], 00:43:31.721 | 99.99th=[ 5473] 00:43:31.721 bw ( KiB/s): min=19927, max=21408, per=24.50%, avg=20667.44, stdev=579.31, samples=9 00:43:31.721 iops : min= 2490, max= 2676, avg=2583.33, stdev=72.55, samples=9 00:43:31.721 lat (usec) : 750=0.01%, 1000=0.01% 00:43:31.721 lat (msec) : 2=0.67%, 4=95.67%, 10=3.65% 00:43:31.721 cpu : usr=95.78%, sys=3.92%, ctx=10, majf=0, minf=9 00:43:31.721 IO depths : 1=0.2%, 2=2.7%, 4=70.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:31.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.721 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.721 issued rwts: total=12916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:31.721 filename0: (groupid=0, jobs=1): err= 0: pid=641673: Mon Dec 16 22:49:21 2024 00:43:31.721 read: IOPS=2815, BW=22.0MiB/s (23.1MB/s)(111MiB/5042msec) 00:43:31.721 slat (usec): min=6, max=162, avg= 8.94, stdev= 3.36 00:43:31.721 clat (usec): min=646, max=43084, avg=2794.44, stdev=534.08 00:43:31.721 lat (usec): min=658, max=43095, avg=2803.39, stdev=533.90 00:43:31.721 clat percentiles (usec): 00:43:31.721 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2474], 00:43:31.721 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2933], 00:43:31.721 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3392], 00:43:31.721 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[ 5014], 00:43:31.721 | 99.99th=[ 5473] 00:43:31.721 bw ( KiB/s): min=21584, max=24752, per=26.92%, avg=22712.00, stdev=1047.35, samples=10 00:43:31.721 iops : min= 2698, max= 3094, avg=2839.00, stdev=130.92, samples=10 00:43:31.721 lat (usec) : 750=0.01%, 1000=0.18% 00:43:31.721 lat (msec) : 2=2.23%, 4=96.53%, 10=1.06%, 50=0.01% 00:43:31.721 cpu : usr=95.26%, sys=4.40%, ctx=22, majf=0, minf=9 00:43:31.721 IO depths : 1=0.4%, 2=6.4%, 4=64.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:31.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.721 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.721 issued rwts: total=14196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:31.722 filename1: (groupid=0, jobs=1): err= 0: pid=641674: Mon Dec 16 22:49:21 2024 00:43:31.722 read: IOPS=2536, BW=19.8MiB/s (20.8MB/s)(99.1MiB/5001msec) 00:43:31.722 slat (nsec): min=6168, max=53721, avg=9003.33, stdev=3406.75 00:43:31.722 clat (usec): min=869, max=5629, avg=3127.44, stdev=438.21 00:43:31.722 lat (usec): min=875, max=5635, avg=3136.44, stdev=438.03 00:43:31.722 clat percentiles (usec): 00:43:31.722 | 1.00th=[ 
2180], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2933], 00:43:31.722 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:43:31.722 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 3982], 00:43:31.722 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5538], 00:43:31.722 | 99.99th=[ 5604] 00:43:31.722 bw ( KiB/s): min=19264, max=21136, per=23.96%, avg=20215.11, stdev=639.26, samples=9 00:43:31.722 iops : min= 2408, max= 2642, avg=2526.89, stdev=79.91, samples=9 00:43:31.722 lat (usec) : 1000=0.02% 00:43:31.722 lat (msec) : 2=0.43%, 4=94.55%, 10=4.99% 00:43:31.722 cpu : usr=96.46%, sys=3.22%, ctx=7, majf=0, minf=9 00:43:31.722 IO depths : 1=0.2%, 2=1.9%, 4=71.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:31.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.722 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.722 issued rwts: total=12683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:31.722 filename1: (groupid=0, jobs=1): err= 0: pid=641675: Mon Dec 16 22:49:21 2024 00:43:31.722 read: IOPS=2675, BW=20.9MiB/s (21.9MB/s)(105MiB/5001msec) 00:43:31.722 slat (nsec): min=6150, max=52608, avg=8965.55, stdev=3125.44 00:43:31.722 clat (usec): min=1073, max=5533, avg=2964.27, stdev=435.94 00:43:31.722 lat (usec): min=1086, max=5540, avg=2973.24, stdev=435.85 00:43:31.722 clat percentiles (usec): 00:43:31.722 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:43:31.722 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:31.722 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3752], 00:43:31.722 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5407], 00:43:31.722 | 99.99th=[ 5538] 00:43:31.722 bw ( KiB/s): min=20096, max=22352, per=25.44%, avg=21464.89, stdev=612.93, samples=9 00:43:31.722 iops : min= 2512, max= 2794, avg=2683.11, stdev=76.62, samples=9 00:43:31.722 lat (msec) : 2=1.01%, 4=95.92%, 10=3.07% 00:43:31.722 cpu : usr=95.48%, sys=4.18%, ctx=10, majf=0, minf=10 00:43:31.722 IO depths : 1=0.2%, 2=3.3%, 4=67.8%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:31.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.722 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:31.722 issued rwts: total=13380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:31.722 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:31.722 00:43:31.722 Run status group 0 (all jobs): 00:43:31.722 READ: bw=82.4MiB/s (86.4MB/s), 19.8MiB/s-22.0MiB/s (20.8MB/s-23.1MB/s), io=415MiB (436MB), run=5001-5042msec 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:31.722 22:49:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.722 00:43:31.722 real 0m24.516s 00:43:31.722 user 4m53.347s 00:43:31.722 sys 0m4.603s 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:31.722 22:49:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:31.722 ************************************ 00:43:31.722 END TEST fio_dif_rand_params 00:43:31.722 ************************************ 00:43:31.981 22:49:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:31.981 22:49:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:31.981 22:49:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:31.981 22:49:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:31.981 ************************************ 00:43:31.981 START TEST fio_dif_digest 00:43:31.981 ************************************ 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:31.981 22:49:21 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:31.981 bdev_null0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:31.981 [2024-12-16 22:49:21.522426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:31.981 { 00:43:31.981 "params": { 00:43:31.981 "name": "Nvme$subsystem", 00:43:31.981 "trtype": "$TEST_TRANSPORT", 
00:43:31.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.981 "adrfam": "ipv4", 00:43:31.981 "trsvcid": "$NVMF_PORT", 00:43:31.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.981 "hdgst": ${hdgst:-false}, 00:43:31.981 "ddgst": ${ddgst:-false} 00:43:31.981 }, 00:43:31.981 "method": "bdev_nvme_attach_controller" 00:43:31.981 } 00:43:31.981 EOF 00:43:31.981 )") 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:31.981 "params": { 00:43:31.981 "name": "Nvme0", 00:43:31.981 "trtype": "tcp", 00:43:31.981 "traddr": "10.0.0.2", 00:43:31.981 "adrfam": "ipv4", 00:43:31.981 "trsvcid": "4420", 00:43:31.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:31.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:31.981 "hdgst": true, 00:43:31.981 "ddgst": true 00:43:31.981 }, 00:43:31.981 "method": "bdev_nvme_attach_controller" 00:43:31.981 }' 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.981 22:49:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:32.240 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:32.240 ... 
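Unlike the fio_dif_rand_params jobs, this one attaches with hdgst=true and ddgst=true, so every NVMe/TCP PDU carries a CRC32C header digest and data digest that both ends generate and verify throughout the 10-second randread pass. For comparison, a hypothetical kernel-initiator connect with the same protection enabled would look roughly like this (flag names as in recent nvme-cli builds; address and NQN taken from the trace):

# Header and data digests enabled on an NVMe/TCP connect.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 \
  --hdr-digest --data-digest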
00:43:32.240 fio-3.35 00:43:32.240 Starting 3 threads 00:43:44.443 00:43:44.443 filename0: (groupid=0, jobs=1): err= 0: pid=642706: Mon Dec 16 22:49:32 2024 00:43:44.443 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(358MiB/10048msec) 00:43:44.443 slat (nsec): min=6475, max=27022, avg=11295.00, stdev=1829.04 00:43:44.443 clat (usec): min=5362, max=52469, avg=10489.59, stdev=1291.37 00:43:44.443 lat (usec): min=5384, max=52477, avg=10500.88, stdev=1291.29 00:43:44.443 clat percentiles (usec): 00:43:44.443 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:43:44.443 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:43:44.443 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:43:44.443 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13566], 99.95th=[49021], 00:43:44.443 | 99.99th=[52691] 00:43:44.443 bw ( KiB/s): min=35328, max=37632, per=34.86%, avg=36659.20, stdev=542.10, samples=20 00:43:44.443 iops : min= 276, max= 294, avg=286.40, stdev= 4.24, samples=20 00:43:44.443 lat (msec) : 10=25.33%, 20=74.60%, 50=0.03%, 100=0.03% 00:43:44.443 cpu : usr=94.49%, sys=5.22%, ctx=17, majf=0, minf=85 00:43:44.443 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:44.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:44.443 filename0: (groupid=0, jobs=1): err= 0: pid=642707: Mon Dec 16 22:49:32 2024 00:43:44.443 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10046msec) 00:43:44.443 slat (nsec): min=6519, max=26184, avg=11537.62, stdev=1745.72 00:43:44.443 clat (usec): min=8761, max=49723, avg=10986.00, stdev=1247.91 00:43:44.443 lat (usec): min=8773, max=49735, avg=10997.53, stdev=1247.91 00:43:44.443 clat percentiles (usec): 00:43:44.443 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:43:44.443 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:43:44.443 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11994], 95.00th=[12256], 00:43:44.443 | 99.00th=[12911], 99.50th=[13042], 99.90th=[14091], 99.95th=[46924], 00:43:44.443 | 99.99th=[49546] 00:43:44.443 bw ( KiB/s): min=34048, max=35840, per=33.27%, avg=34995.20, stdev=492.08, samples=20 00:43:44.443 iops : min= 266, max= 280, avg=273.40, stdev= 3.84, samples=20 00:43:44.443 lat (msec) : 10=8.85%, 20=91.08%, 50=0.07% 00:43:44.443 cpu : usr=94.58%, sys=5.11%, ctx=18, majf=0, minf=91 00:43:44.443 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:44.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 issued rwts: total=2736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:44.443 filename0: (groupid=0, jobs=1): err= 0: pid=642708: Mon Dec 16 22:49:32 2024 00:43:44.443 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10045msec) 00:43:44.443 slat (nsec): min=6476, max=25248, avg=11538.77, stdev=1791.32 00:43:44.443 clat (usec): min=8905, max=49595, avg=11325.30, stdev=1283.97 00:43:44.443 lat (usec): min=8917, max=49606, avg=11336.84, stdev=1284.04 00:43:44.443 clat percentiles (usec): 00:43:44.443 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 
00:43:44.443 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:43:44.443 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:43:44.443 | 99.00th=[13435], 99.50th=[13566], 99.90th=[15926], 99.95th=[46400], 00:43:44.443 | 99.99th=[49546] 00:43:44.443 bw ( KiB/s): min=33280, max=35584, per=32.28%, avg=33945.60, stdev=527.92, samples=20 00:43:44.443 iops : min= 260, max= 278, avg=265.20, stdev= 4.12, samples=20 00:43:44.443 lat (msec) : 10=4.90%, 20=95.03%, 50=0.08% 00:43:44.443 cpu : usr=94.74%, sys=4.96%, ctx=20, majf=0, minf=47 00:43:44.443 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:44.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:44.443 issued rwts: total=2654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:44.443 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:44.443 00:43:44.443 Run status group 0 (all jobs): 00:43:44.443 READ: bw=103MiB/s (108MB/s), 33.0MiB/s-35.7MiB/s (34.6MB/s-37.4MB/s), io=1032MiB (1082MB), run=10045-10048msec 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.443 00:43:44.443 real 0m11.168s 00:43:44.443 user 0m35.350s 00:43:44.443 sys 0m1.798s 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:44.443 22:49:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:44.443 ************************************ 00:43:44.443 END TEST fio_dif_digest 00:43:44.443 ************************************ 00:43:44.443 22:49:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:44.443 22:49:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:44.443 rmmod nvme_tcp 00:43:44.443 rmmod nvme_fabrics 00:43:44.443 rmmod nvme_keyring 00:43:44.443 22:49:32 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 634535 ']' 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 634535 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 634535 ']' 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 634535 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634535 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634535' 00:43:44.443 killing process with pid 634535 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 634535 00:43:44.443 22:49:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 634535 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:44.443 22:49:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:46.347 Waiting for block devices as requested 00:43:46.347 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:46.347 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:46.347 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:46.347 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:46.347 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:46.607 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:46.607 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:46.607 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:46.607 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:46.865 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:46.865 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:46.865 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:47.124 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:47.124 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:47.124 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:47.383 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:47.383 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:47.383 22:49:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:47.383 22:49:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:47.383 22:49:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:49.918 22:49:39 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:49.918 
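The vfio-pci -> nvme and vfio-pci -> ioatdma lines above are setup.sh reset handing every test device back to its kernel driver; the setup pass before the next suite flips them to vfio-pci again. Underneath, this is the standard sysfs unbind/driver_override/probe sequence; a hand-rolled sketch of the equivalent for the one SSD in this rig (run as root) would be roughly:

bdf=0000:5e:00.0   # the 8086:0a54 NVMe SSD from the log
echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind
echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override afterwards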
00:43:49.918 real 1m14.177s 00:43:49.918 user 7m11.681s 00:43:49.918 sys 0m20.179s 00:43:49.918 22:49:39 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:49.918 22:49:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:49.918 ************************************ 00:43:49.918 END TEST nvmf_dif 00:43:49.918 ************************************ 00:43:49.918 22:49:39 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:49.918 22:49:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:49.918 22:49:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:49.918 22:49:39 -- common/autotest_common.sh@10 -- # set +x 00:43:49.918 ************************************ 00:43:49.918 START TEST nvmf_abort_qd_sizes 00:43:49.918 ************************************ 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:49.918 * Looking for test storage... 00:43:49.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:49.918 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:49.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:49.919 --rc genhtml_branch_coverage=1 00:43:49.919 --rc genhtml_function_coverage=1 00:43:49.919 --rc genhtml_legend=1 00:43:49.919 --rc geninfo_all_blocks=1 00:43:49.919 --rc geninfo_unexecuted_blocks=1 00:43:49.919 00:43:49.919 ' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:49.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:49.919 --rc genhtml_branch_coverage=1 00:43:49.919 --rc genhtml_function_coverage=1 00:43:49.919 --rc genhtml_legend=1 00:43:49.919 --rc geninfo_all_blocks=1 00:43:49.919 --rc geninfo_unexecuted_blocks=1 00:43:49.919 00:43:49.919 ' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:49.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:49.919 --rc genhtml_branch_coverage=1 00:43:49.919 --rc genhtml_function_coverage=1 00:43:49.919 --rc genhtml_legend=1 00:43:49.919 --rc geninfo_all_blocks=1 00:43:49.919 --rc geninfo_unexecuted_blocks=1 00:43:49.919 00:43:49.919 ' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:49.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:49.919 --rc genhtml_branch_coverage=1 00:43:49.919 --rc genhtml_function_coverage=1 00:43:49.919 --rc genhtml_legend=1 00:43:49.919 --rc geninfo_all_blocks=1 00:43:49.919 --rc geninfo_unexecuted_blocks=1 00:43:49.919 00:43:49.919 ' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:49.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:49.919 22:49:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:56.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:56.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:56.488 Found net devices under 0000:af:00.0: cvl_0_0 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:56.488 Found net devices under 0000:af:00.1: cvl_0_1 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:56.488 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:56.489 22:49:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:56.489 22:49:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:56.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:56.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:43:56.489 00:43:56.489 --- 10.0.0.2 ping statistics --- 00:43:56.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.489 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:56.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
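nvmf_tcp_init turns those two ports into a self-contained initiator/target topology: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target side, cvl_0_1 stays in the root namespace as the initiator, the ipts helper opens TCP port 4420 with a comment-tagged iptables rule, and both directions are verified with ping. Condensed from the xtrace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator

The comment tag matters later: teardown removes exactly these rules by filtering on SPDK_NVMF (see the iptr note near the end of this section). The ping replies below confirm the wiring before any NVMe traffic is attempted.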
00:43:56.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:43:56.489 00:43:56.489 --- 10.0.0.1 ping statistics --- 00:43:56.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.489 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:56.489 22:49:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:58.393 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:58.393 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:59.328 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:59.328 22:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=650564 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 650564 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 650564 ']' 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:59.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:59.328 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:59.585 [2024-12-16 22:49:49.059542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:43:59.585 [2024-12-16 22:49:49.059586] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:59.585 [2024-12-16 22:49:49.137980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:59.585 [2024-12-16 22:49:49.162716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:59.585 [2024-12-16 22:49:49.162750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:59.585 [2024-12-16 22:49:49.162757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:59.585 [2024-12-16 22:49:49.162763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:59.585 [2024-12-16 22:49:49.162768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:59.585 [2024-12-16 22:49:49.164039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:59.585 [2024-12-16 22:49:49.164146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:59.585 [2024-12-16 22:49:49.164180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:59.585 [2024-12-16 22:49:49.164181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:59.585 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:59.585 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:59.585 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:59.585 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:59.585 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:59.842 
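nvmfappstart then launches the SPDK target inside the namespace (core mask 0xf, every tracepoint group enabled) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of the pattern, with a simplified polling loop standing in for the more careful one in autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # poll the RPC socket until the target can serve requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        sleep 0.5
    done

Once the reactors report in (cores 0-3 below), the test also enumerates NVMe PCI functions suitable for userspace use; 0000:5e:00.0 is the one disk handed to spdk_target_abort.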
22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:59.842 22:49:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:59.842 ************************************ 00:43:59.842 START TEST spdk_target_abort 00:43:59.843 ************************************ 00:43:59.843 22:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:59.843 22:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:59.843 22:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:59.843 22:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.843 22:49:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.120 spdk_targetn1 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.120 [2024-12-16 22:49:52.172478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.120 [2024-12-16 22:49:52.224788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:03.120 22:49:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:06.398 Initializing NVMe Controllers 00:44:06.398 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:06.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:06.398 Initialization complete. Launching workers. 00:44:06.398 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16666, failed: 0 00:44:06.398 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1454, failed to submit 15212 00:44:06.398 success 737, unsuccessful 717, failed 0 00:44:06.398 22:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:06.398 22:49:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:09.747 Initializing NVMe Controllers 00:44:09.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:09.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:09.747 Initialization complete. Launching workers. 00:44:09.747 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8373, failed: 0 00:44:09.747 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7106 00:44:09.747 success 303, unsuccessful 964, failed 0 00:44:09.747 22:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:09.747 22:49:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:12.351 Initializing NVMe Controllers 00:44:12.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:12.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:12.351 Initialization complete. Launching workers. 
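The rabort helper builds the transport ID string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and runs the abort example once per queue depth. The per-run counters are internally consistent: aborts submitted plus aborts that could not be submitted equals I/Os completed, and submitted aborts split into success and unsuccessful. For the qd=4 run above: 1454 + 15212 = 16666 and 737 + 717 = 1454. The target stack being aborted against was assembled with four RPCs plus the earlier controller attach:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn \
        -t tcp -a 10.0.0.2 -s 4420

(The log drives these through rpc_cmd, which forwards to rpc.py against the target's socket; the flags shown are taken verbatim from the trace.)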
00:44:12.351 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38477, failed: 0 00:44:12.351 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2884, failed to submit 35593 00:44:12.351 success 612, unsuccessful 2272, failed 0 00:44:12.351 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:12.351 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.351 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:12.608 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.608 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:12.608 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.608 22:50:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 650564 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 650564 ']' 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 650564 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650564 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650564' 00:44:13.979 killing process with pid 650564 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 650564 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 650564 00:44:13.979 00:44:13.979 real 0m14.205s 00:44:13.979 user 0m54.395s 00:44:13.979 sys 0m2.297s 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:13.979 ************************************ 00:44:13.979 END TEST spdk_target_abort 00:44:13.979 ************************************ 00:44:13.979 22:50:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:13.979 22:50:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:13.979 22:50:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:13.979 22:50:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:13.979 ************************************ 00:44:13.979 START TEST kernel_target_abort 00:44:13.979 
************************************ 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:13.979 22:50:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:17.266 Waiting for block devices as requested 00:44:17.266 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:17.266 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:17.266 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:17.524 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:17.524 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:17.524 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:17.525 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:17.783 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:17.783 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:17.783 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:18.042 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:18.042 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:18.042 No valid GPT data, bailing 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:18.042 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:18.301 22:50:07 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:18.301 00:44:18.301 Discovery Log Number of Records 2, Generation counter 2 00:44:18.301 =====Discovery Log Entry 0====== 00:44:18.301 trtype: tcp 00:44:18.301 adrfam: ipv4 00:44:18.301 subtype: current discovery subsystem 00:44:18.301 treq: not specified, sq flow control disable supported 00:44:18.301 portid: 1 00:44:18.301 trsvcid: 4420 00:44:18.301 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:18.301 traddr: 10.0.0.1 00:44:18.301 eflags: none 00:44:18.301 sectype: none 00:44:18.301 =====Discovery Log Entry 1====== 00:44:18.301 trtype: tcp 00:44:18.301 adrfam: ipv4 00:44:18.301 subtype: nvme subsystem 00:44:18.301 treq: not specified, sq flow control disable supported 00:44:18.301 portid: 1 00:44:18.301 trsvcid: 4420 00:44:18.301 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:18.301 traddr: 10.0.0.1 00:44:18.301 eflags: none 00:44:18.301 sectype: none 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:18.301 22:50:07 
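configure_kernel_target builds the equivalent target out of the kernel's nvmet driver through configfs. The xtrace above shows the bare mkdir/echo/ln commands but not their redirection targets, so the sketch below fills those in from the standard nvmet configfs layout; treat the exact attribute names written by common.sh as an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"  # assumed target
    echo 1            > "$subsys/attr_allow_any_host"              # assumed target
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output above is the smoke test: a healthy setup reports two discovery log entries, the discovery subsystem itself and the new nqn.2016-06.io.spdk:testnqn.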
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:18.301 22:50:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:21.584 Initializing NVMe Controllers 00:44:21.584 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:21.584 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:21.584 Initialization complete. Launching workers. 00:44:21.584 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94433, failed: 0 00:44:21.584 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94433, failed to submit 0 00:44:21.584 success 0, unsuccessful 94433, failed 0 00:44:21.584 22:50:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:21.584 22:50:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:24.864 Initializing NVMe Controllers 00:44:24.864 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:24.864 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:24.864 Initialization complete. Launching workers. 
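Note how the abort outcomes differ from the SPDK target phase: every abort submitted to the kernel target comes back unsuccessful (success 0 in all three kernel-target runs), presumably because the targeted I/Os have already completed by the time nvmet processes the Abort command; the counter identity still holds (0 + 94433 = 94433 submitted for the qd=4 run above). The sweep itself is the same qds array driving the example binary, only with the kernel target's transport ID:

    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done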
00:44:24.864 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 152230, failed: 0 00:44:24.864 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38186, failed to submit 114044 00:44:24.864 success 0, unsuccessful 38186, failed 0 00:44:24.864 22:50:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:24.865 22:50:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:28.146 Initializing NVMe Controllers 00:44:28.146 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:28.146 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:28.146 Initialization complete. Launching workers. 00:44:28.146 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142451, failed: 0 00:44:28.146 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35686, failed to submit 106765 00:44:28.146 success 0, unsuccessful 35686, failed 0 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:28.146 22:50:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:30.681 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:30.681 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:44:30.681 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:31.248 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:31.507 00:44:31.507 real 0m17.378s 00:44:31.507 user 0m9.085s 00:44:31.507 sys 0m5.038s 00:44:31.507 22:50:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.507 22:50:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:31.507 ************************************ 00:44:31.507 END TEST kernel_target_abort 00:44:31.507 ************************************ 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:31.507 rmmod nvme_tcp 00:44:31.507 rmmod nvme_fabrics 00:44:31.507 rmmod nvme_keyring 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 650564 ']' 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 650564 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 650564 ']' 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 650564 00:44:31.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (650564) - No such process 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 650564 is not found' 00:44:31.507 Process with pid 650564 is not found 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:31.507 22:50:21 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:34.042 Waiting for block devices as requested 00:44:34.301 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:34.301 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:34.301 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:34.559 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:34.559 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:34.559 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:34.817 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:34.817 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:34.817 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:35.076 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:35.076 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:35.076 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:35.076 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:35.334 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:35.334 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:35.334 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:35.593 
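clean_kernel_target unwinds the configfs state in reverse: disable the namespace, drop the port-to-subsystem link, then remove the directories before unloading the modules. configfs will not remove a directory that still has children or references, so the ordering is load-bearing. Reusing the path variables from the setup sketch earlier (the target of the bare 'echo 0' is assumed to be the namespace enable attribute):

    echo 0 > "$subsys/namespaces/1/enable"    # assumed target
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet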
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:35.593 22:50:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.126 22:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:38.126 00:44:38.126 real 0m48.044s 00:44:38.126 user 1m7.797s 00:44:38.126 sys 0m15.957s 00:44:38.126 22:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.126 22:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:38.126 ************************************ 00:44:38.126 END TEST nvmf_abort_qd_sizes 00:44:38.126 ************************************ 00:44:38.126 22:50:27 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:38.126 22:50:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:38.126 22:50:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.126 22:50:27 -- common/autotest_common.sh@10 -- # set +x 00:44:38.126 ************************************ 00:44:38.126 START TEST keyring_file 00:44:38.126 ************************************ 00:44:38.126 22:50:27 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:38.126 * Looking for test storage... 
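nvmftestfini's iptr helper is the counterpart of the ipts call from setup: because every firewall rule the harness added carries an SPDK_NVMF comment, cleanup can delete exactly those rules by round-tripping the ruleset through grep, leaving any pre-existing rules untouched:

    iptables-save | grep -v SPDK_NVMF | iptables-restore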
00:44:38.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:38.126 22:50:27 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:38.126 22:50:27 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:38.126 22:50:27 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:38.126 22:50:27 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.126 22:50:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.127 --rc genhtml_branch_coverage=1 00:44:38.127 --rc genhtml_function_coverage=1 00:44:38.127 --rc genhtml_legend=1 00:44:38.127 --rc geninfo_all_blocks=1 00:44:38.127 --rc geninfo_unexecuted_blocks=1 00:44:38.127 00:44:38.127 ' 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.127 --rc genhtml_branch_coverage=1 00:44:38.127 --rc genhtml_function_coverage=1 00:44:38.127 --rc genhtml_legend=1 00:44:38.127 --rc geninfo_all_blocks=1 
00:44:38.127 --rc geninfo_unexecuted_blocks=1 00:44:38.127 00:44:38.127 ' 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.127 --rc genhtml_branch_coverage=1 00:44:38.127 --rc genhtml_function_coverage=1 00:44:38.127 --rc genhtml_legend=1 00:44:38.127 --rc geninfo_all_blocks=1 00:44:38.127 --rc geninfo_unexecuted_blocks=1 00:44:38.127 00:44:38.127 ' 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:38.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.127 --rc genhtml_branch_coverage=1 00:44:38.127 --rc genhtml_function_coverage=1 00:44:38.127 --rc genhtml_legend=1 00:44:38.127 --rc geninfo_all_blocks=1 00:44:38.127 --rc geninfo_unexecuted_blocks=1 00:44:38.127 00:44:38.127 ' 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.127 22:50:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.127 22:50:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.127 22:50:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.127 22:50:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.127 22:50:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:38.127 22:50:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:38.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
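For the keyring test, prep_key materializes each TLS PSK as a mode-0600 temp file in the NVMe/TCP PSK interchange format. format_interchange_psk does the wrapping with an inline Python snippet; the string has the shape NVMeTLSkey-1:<digest>:<base64 payload>: where digest 0 means no hash and the payload is the raw key with a CRC32 appended. A sketch under those assumptions (the exact payload byte order is not visible in the trace):

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)   # e.g. /tmp/tmp.GgBjzVzwww in this run
    python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(raw).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(raw+crc).decode()+":")' "$key" > "$path"
    chmod 0600 "$path"   # match the script's permission tightening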
00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GgBjzVzwww 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GgBjzVzwww 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GgBjzVzwww 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GgBjzVzwww 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4yrxU3Nde4 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:38.127 22:50:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4yrxU3Nde4 00:44:38.127 22:50:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4yrxU3Nde4 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.4yrxU3Nde4 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=659143 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 659143 00:44:38.127 22:50:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659143 ']' 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:38.127 22:50:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:38.128 22:50:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:38.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:38.128 22:50:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:38.128 22:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:38.128 [2024-12-16 22:50:27.643791] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:38.128 [2024-12-16 22:50:27.643840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659143 ] 00:44:38.128 [2024-12-16 22:50:27.713702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.128 [2024-12-16 22:50:27.736561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:38.386 22:50:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:38.386 [2024-12-16 22:50:27.940928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:38.386 null0 00:44:38.386 [2024-12-16 22:50:27.972981] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:38.386 [2024-12-16 22:50:27.973268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.386 22:50:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.386 22:50:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:38.386 [2024-12-16 22:50:28.005061] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:38.386 request: 00:44:38.386 { 00:44:38.386 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:38.386 "secure_channel": false, 00:44:38.386 "listen_address": { 00:44:38.386 "trtype": "tcp", 00:44:38.386 "traddr": "127.0.0.1", 00:44:38.386 "trsvcid": "4420" 00:44:38.386 }, 00:44:38.386 "method": "nvmf_subsystem_add_listener", 00:44:38.386 "req_id": 1 00:44:38.386 } 00:44:38.386 Got JSON-RPC error response 00:44:38.386 response: 00:44:38.386 { 00:44:38.386 "code": 
-32602, 00:44:38.386 "message": "Invalid parameters" 00:44:38.386 } 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:38.386 22:50:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=659148 00:44:38.386 22:50:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 659148 /var/tmp/bperf.sock 00:44:38.386 22:50:28 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659148 ']' 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:38.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:38.386 22:50:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:38.386 [2024-12-16 22:50:28.059239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:38.386 [2024-12-16 22:50:28.059280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659148 ] 00:44:38.644 [2024-12-16 22:50:28.130179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.644 [2024-12-16 22:50:28.151954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:38.644 22:50:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.644 22:50:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:38.644 22:50:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:38.644 22:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:38.902 22:50:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4yrxU3Nde4 00:44:38.902 22:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4yrxU3Nde4 00:44:39.159 22:50:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:39.159 22:50:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.159 
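(Annotation. The trace above is the registration half of the keyring_file flow: prep_key turns raw hex into an NVMe TLS PSK in interchange format, mktemp gives it a file that is chmod'ed to 0600, and keyring_file_add_key publishes that file to bdevperf over /var/tmp/bperf.sock. A minimal standalone sketch of the interchange encoding for the digest-0 case exercised here, assuming GNU coreutils plus xxd; the variable names and temp paths are illustrative, not the harness's own helpers:

# Interchange format: NVMeTLSkey-1:<hash>:base64(key bytes || CRC32):
hexkey=00112233445566778899aabbccddeeff   # same key material as key0 above
digest=0                                  # 0 = configured PSK, no hash transform
raw=$(mktemp)
printf '%s' "$hexkey" | xxd -r -p > "$raw"
# A gzip stream ends with CRC32 (little-endian) then input size; keep the CRC32.
b64=$({ cat "$raw"; gzip -c "$raw" | tail -c 8 | head -c 4; } | base64 -w0)
psk_path=$(mktemp)
printf 'NVMeTLSkey-1:0%s:%s:\n' "$digest" "$b64" > "$psk_path"
chmod 0600 "$psk_path"   # keyring_file rejects group/other-accessible key files

The keyring_get_keys | jq pair traced above is then just the read-back: .path should equal the mktemp file and .refcnt sits at 1 until a controller pins the key, which is what the (( 1 == 1 )) and later (( 2 == 2 )) assertions check.)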
22:50:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.GgBjzVzwww == \/\t\m\p\/\t\m\p\.\G\g\B\j\z\V\z\w\w\w ]] 00:44:39.159 22:50:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:39.159 22:50:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.159 22:50:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.417 22:50:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.4yrxU3Nde4 == \/\t\m\p\/\t\m\p\.\4\y\r\x\U\3\N\d\e\4 ]] 00:44:39.417 22:50:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:39.417 22:50:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:39.417 22:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.417 22:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.417 22:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.417 22:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.675 22:50:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:39.675 22:50:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:39.675 22:50:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:39.675 22:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.675 22:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.675 22:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:39.675 22:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.933 22:50:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:39.933 22:50:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:39.933 22:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:39.933 [2024-12-16 22:50:29.609743] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:40.190 nvme0n1 00:44:40.190 22:50:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.190 22:50:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:40.190 22:50:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:40.190 22:50:29 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:40.190 22:50:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.447 22:50:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:40.447 22:50:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:40.705 Running I/O for 1 seconds... 00:44:41.637 19282.00 IOPS, 75.32 MiB/s 00:44:41.637 Latency(us) 00:44:41.637 [2024-12-16T21:50:31.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:41.637 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:41.637 nvme0n1 : 1.00 19328.60 75.50 0.00 0.00 6610.07 2715.06 17725.93 00:44:41.637 [2024-12-16T21:50:31.338Z] =================================================================================================================== 00:44:41.637 [2024-12-16T21:50:31.338Z] Total : 19328.60 75.50 0.00 0.00 6610.07 2715.06 17725.93 00:44:41.637 { 00:44:41.637 "results": [ 00:44:41.637 { 00:44:41.637 "job": "nvme0n1", 00:44:41.637 "core_mask": "0x2", 00:44:41.637 "workload": "randrw", 00:44:41.637 "percentage": 50, 00:44:41.637 "status": "finished", 00:44:41.637 "queue_depth": 128, 00:44:41.637 "io_size": 4096, 00:44:41.637 "runtime": 1.004315, 00:44:41.637 "iops": 19328.597103498403, 00:44:41.637 "mibps": 75.50233243554064, 00:44:41.637 "io_failed": 0, 00:44:41.637 "io_timeout": 0, 00:44:41.637 "avg_latency_us": 6610.074987096837, 00:44:41.637 "min_latency_us": 2715.062857142857, 00:44:41.637 "max_latency_us": 17725.92761904762 00:44:41.637 } 00:44:41.637 ], 00:44:41.637 "core_count": 1 00:44:41.637 } 00:44:41.637 22:50:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:41.637 22:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:41.895 22:50:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:41.895 22:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:41.895 22:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:41.895 22:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:41.895 22:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:41.895 22:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.152 22:50:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:42.152 22:50:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:42.152 22:50:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:42.152 22:50:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.152 22:50:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:42.152 22:50:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.152 22:50:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.152 22:50:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:42.152 22:50:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:42.152 22:50:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:42.152 22:50:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:42.152 22:50:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:42.153 22:50:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:42.153 22:50:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:42.153 22:50:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:42.153 22:50:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:42.153 22:50:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:42.410 [2024-12-16 22:50:31.995122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:42.410 [2024-12-16 22:50:31.995198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790950 (107): Transport endpoint is not connected 00:44:42.410 [2024-12-16 22:50:31.996188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790950 (9): Bad file descriptor 00:44:42.410 [2024-12-16 22:50:31.997193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:42.410 [2024-12-16 22:50:31.997203] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:42.410 [2024-12-16 22:50:31.997211] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:42.410 [2024-12-16 22:50:31.997219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
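(Annotation. The attach traced above is a deliberate failure case: the target was provisioned with key0's PSK, so presenting key1 cannot complete the TLS handshake; the socket drops, which is the errno 107 "Transport endpoint is not connected" and the errno 9 on the now-dead fd in the error lines above, and the RPC surfaces the -5 Input/output error dumped next. A by-hand equivalent of the harness's NOT wrapper, using the same rpc.py flags as the trace and assuming the socket path is unchanged:

# Expect failure: rpc.py exits non-zero when the JSON-RPC call errors out.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
     -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
     -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
then
  echo 'unexpected: attach with the mismatched PSK succeeded' >&2
  exit 1
fi

The refcount checks that follow confirm the failed attach did not leak a reference on either key.)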
00:44:42.410 request: 00:44:42.410 { 00:44:42.410 "name": "nvme0", 00:44:42.410 "trtype": "tcp", 00:44:42.410 "traddr": "127.0.0.1", 00:44:42.410 "adrfam": "ipv4", 00:44:42.410 "trsvcid": "4420", 00:44:42.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:42.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:42.410 "prchk_reftag": false, 00:44:42.410 "prchk_guard": false, 00:44:42.410 "hdgst": false, 00:44:42.410 "ddgst": false, 00:44:42.410 "psk": "key1", 00:44:42.410 "allow_unrecognized_csi": false, 00:44:42.410 "method": "bdev_nvme_attach_controller", 00:44:42.410 "req_id": 1 00:44:42.410 } 00:44:42.410 Got JSON-RPC error response 00:44:42.410 response: 00:44:42.410 { 00:44:42.410 "code": -5, 00:44:42.410 "message": "Input/output error" 00:44:42.410 } 00:44:42.410 22:50:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:42.410 22:50:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:42.410 22:50:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:42.410 22:50:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:42.410 22:50:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:42.410 22:50:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:42.410 22:50:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.410 22:50:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.410 22:50:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.410 22:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.668 22:50:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:42.668 22:50:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:42.668 22:50:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:42.668 22:50:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.668 22:50:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.668 22:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.668 22:50:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:42.925 22:50:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:42.925 22:50:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:42.925 22:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:42.925 22:50:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:42.925 22:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:43.182 22:50:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:43.182 22:50:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:43.182 22:50:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.439 22:50:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:43.439 22:50:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.GgBjzVzwww 00:44:43.439 22:50:33 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.439 22:50:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.439 22:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.697 [2024-12-16 22:50:33.195392] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GgBjzVzwww': 0100660 00:44:43.697 [2024-12-16 22:50:33.195418] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:43.697 request: 00:44:43.697 { 00:44:43.697 "name": "key0", 00:44:43.697 "path": "/tmp/tmp.GgBjzVzwww", 00:44:43.697 "method": "keyring_file_add_key", 00:44:43.697 "req_id": 1 00:44:43.697 } 00:44:43.697 Got JSON-RPC error response 00:44:43.697 response: 00:44:43.697 { 00:44:43.697 "code": -1, 00:44:43.697 "message": "Operation not permitted" 00:44:43.697 } 00:44:43.697 22:50:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:43.697 22:50:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:43.697 22:50:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:43.697 22:50:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:43.697 22:50:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.GgBjzVzwww 00:44:43.697 22:50:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.697 22:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgBjzVzwww 00:44:43.955 22:50:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.GgBjzVzwww 00:44:43.955 22:50:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.955 22:50:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:43.955 22:50:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:43.955 22:50:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.955 22:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.212 [2024-12-16 22:50:33.780944] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GgBjzVzwww': No such file or directory 00:44:44.212 [2024-12-16 22:50:33.780968] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:44.212 [2024-12-16 22:50:33.780984] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:44.212 [2024-12-16 22:50:33.780996] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:44.212 [2024-12-16 22:50:33.781002] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:44.212 [2024-12-16 22:50:33.781008] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:44.212 request: 00:44:44.212 { 00:44:44.212 "name": "nvme0", 00:44:44.212 "trtype": "tcp", 00:44:44.212 "traddr": "127.0.0.1", 00:44:44.212 "adrfam": "ipv4", 00:44:44.212 "trsvcid": "4420", 00:44:44.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.212 "prchk_reftag": false, 00:44:44.212 "prchk_guard": false, 00:44:44.212 "hdgst": false, 00:44:44.212 "ddgst": false, 00:44:44.212 "psk": "key0", 00:44:44.212 "allow_unrecognized_csi": false, 00:44:44.212 "method": "bdev_nvme_attach_controller", 00:44:44.212 "req_id": 1 00:44:44.212 } 00:44:44.212 Got JSON-RPC error response 00:44:44.212 response: 00:44:44.212 { 00:44:44.212 "code": -19, 00:44:44.212 "message": "No such device" 00:44:44.212 } 00:44:44.212 22:50:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:44.212 22:50:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:44.212 22:50:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:44.212 22:50:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:44.212 22:50:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:44.212 22:50:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:44.469 22:50:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wnXc1gKt1K 00:44:44.469 22:50:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:44.469 22:50:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:44.469 22:50:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wnXc1gKt1K 00:44:44.470 22:50:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wnXc1gKt1K 00:44:44.470 22:50:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.wnXc1gKt1K 00:44:44.470 22:50:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wnXc1gKt1K 00:44:44.470 22:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wnXc1gKt1K 00:44:44.727 22:50:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.727 22:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.984 nvme0n1 00:44:44.984 22:50:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:44.984 22:50:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:44.984 22:50:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:44.984 22:50:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:44.984 22:50:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:44.984 22:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.241 22:50:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:45.241 22:50:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:45.241 22:50:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:45.241 22:50:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:45.241 22:50:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:45.242 22:50:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.242 22:50:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:45.242 22:50:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.499 22:50:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:45.499 22:50:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:45.499 22:50:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.499 22:50:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:45.499 22:50:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.499 22:50:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:45.499 22:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.756 22:50:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:45.756 22:50:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:45.756 22:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:46.013 22:50:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:46.013 22:50:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:46.013 22:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.013 22:50:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:46.013 22:50:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wnXc1gKt1K 00:44:46.013 22:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wnXc1gKt1K 00:44:46.270 22:50:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4yrxU3Nde4 00:44:46.270 22:50:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4yrxU3Nde4 00:44:46.528 22:50:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:46.528 22:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:46.785 nvme0n1 00:44:46.785 22:50:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:46.785 22:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:47.043 22:50:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:47.043 "subsystems": [ 00:44:47.043 { 00:44:47.043 "subsystem": "keyring", 00:44:47.043 "config": [ 00:44:47.043 { 00:44:47.043 "method": "keyring_file_add_key", 00:44:47.043 "params": { 00:44:47.043 "name": "key0", 00:44:47.043 "path": "/tmp/tmp.wnXc1gKt1K" 00:44:47.043 } 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "method": "keyring_file_add_key", 00:44:47.043 "params": { 00:44:47.043 "name": "key1", 00:44:47.043 "path": "/tmp/tmp.4yrxU3Nde4" 00:44:47.043 } 00:44:47.043 } 00:44:47.043 ] 00:44:47.043 
}, 00:44:47.043 { 00:44:47.043 "subsystem": "iobuf", 00:44:47.043 "config": [ 00:44:47.043 { 00:44:47.043 "method": "iobuf_set_options", 00:44:47.043 "params": { 00:44:47.043 "small_pool_count": 8192, 00:44:47.043 "large_pool_count": 1024, 00:44:47.043 "small_bufsize": 8192, 00:44:47.043 "large_bufsize": 135168, 00:44:47.043 "enable_numa": false 00:44:47.043 } 00:44:47.043 } 00:44:47.043 ] 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "subsystem": "sock", 00:44:47.043 "config": [ 00:44:47.043 { 00:44:47.043 "method": "sock_set_default_impl", 00:44:47.043 "params": { 00:44:47.043 "impl_name": "posix" 00:44:47.043 } 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "method": "sock_impl_set_options", 00:44:47.043 "params": { 00:44:47.043 "impl_name": "ssl", 00:44:47.043 "recv_buf_size": 4096, 00:44:47.043 "send_buf_size": 4096, 00:44:47.043 "enable_recv_pipe": true, 00:44:47.043 "enable_quickack": false, 00:44:47.043 "enable_placement_id": 0, 00:44:47.043 "enable_zerocopy_send_server": true, 00:44:47.043 "enable_zerocopy_send_client": false, 00:44:47.043 "zerocopy_threshold": 0, 00:44:47.043 "tls_version": 0, 00:44:47.043 "enable_ktls": false 00:44:47.043 } 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "method": "sock_impl_set_options", 00:44:47.043 "params": { 00:44:47.043 "impl_name": "posix", 00:44:47.043 "recv_buf_size": 2097152, 00:44:47.043 "send_buf_size": 2097152, 00:44:47.043 "enable_recv_pipe": true, 00:44:47.043 "enable_quickack": false, 00:44:47.043 "enable_placement_id": 0, 00:44:47.043 "enable_zerocopy_send_server": true, 00:44:47.043 "enable_zerocopy_send_client": false, 00:44:47.043 "zerocopy_threshold": 0, 00:44:47.043 "tls_version": 0, 00:44:47.043 "enable_ktls": false 00:44:47.043 } 00:44:47.043 } 00:44:47.043 ] 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "subsystem": "vmd", 00:44:47.043 "config": [] 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "subsystem": "accel", 00:44:47.043 "config": [ 00:44:47.043 { 00:44:47.043 "method": "accel_set_options", 00:44:47.043 "params": { 00:44:47.043 "small_cache_size": 128, 00:44:47.043 "large_cache_size": 16, 00:44:47.043 "task_count": 2048, 00:44:47.043 "sequence_count": 2048, 00:44:47.043 "buf_count": 2048 00:44:47.043 } 00:44:47.043 } 00:44:47.043 ] 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "subsystem": "bdev", 00:44:47.043 "config": [ 00:44:47.043 { 00:44:47.043 "method": "bdev_set_options", 00:44:47.043 "params": { 00:44:47.043 "bdev_io_pool_size": 65535, 00:44:47.043 "bdev_io_cache_size": 256, 00:44:47.043 "bdev_auto_examine": true, 00:44:47.043 "iobuf_small_cache_size": 128, 00:44:47.043 "iobuf_large_cache_size": 16 00:44:47.043 } 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "method": "bdev_raid_set_options", 00:44:47.043 "params": { 00:44:47.043 "process_window_size_kb": 1024, 00:44:47.043 "process_max_bandwidth_mb_sec": 0 00:44:47.043 } 00:44:47.043 }, 00:44:47.043 { 00:44:47.043 "method": "bdev_iscsi_set_options", 00:44:47.044 "params": { 00:44:47.044 "timeout_sec": 30 00:44:47.044 } 00:44:47.044 }, 00:44:47.044 { 00:44:47.044 "method": "bdev_nvme_set_options", 00:44:47.044 "params": { 00:44:47.044 "action_on_timeout": "none", 00:44:47.044 "timeout_us": 0, 00:44:47.044 "timeout_admin_us": 0, 00:44:47.044 "keep_alive_timeout_ms": 10000, 00:44:47.044 "arbitration_burst": 0, 00:44:47.044 "low_priority_weight": 0, 00:44:47.044 "medium_priority_weight": 0, 00:44:47.044 "high_priority_weight": 0, 00:44:47.044 "nvme_adminq_poll_period_us": 10000, 00:44:47.044 "nvme_ioq_poll_period_us": 0, 00:44:47.044 "io_queue_requests": 512, 00:44:47.044 
"delay_cmd_submit": true, 00:44:47.044 "transport_retry_count": 4, 00:44:47.044 "bdev_retry_count": 3, 00:44:47.044 "transport_ack_timeout": 0, 00:44:47.044 "ctrlr_loss_timeout_sec": 0, 00:44:47.044 "reconnect_delay_sec": 0, 00:44:47.044 "fast_io_fail_timeout_sec": 0, 00:44:47.044 "disable_auto_failback": false, 00:44:47.044 "generate_uuids": false, 00:44:47.044 "transport_tos": 0, 00:44:47.044 "nvme_error_stat": false, 00:44:47.044 "rdma_srq_size": 0, 00:44:47.044 "io_path_stat": false, 00:44:47.044 "allow_accel_sequence": false, 00:44:47.044 "rdma_max_cq_size": 0, 00:44:47.044 "rdma_cm_event_timeout_ms": 0, 00:44:47.044 "dhchap_digests": [ 00:44:47.044 "sha256", 00:44:47.044 "sha384", 00:44:47.044 "sha512" 00:44:47.044 ], 00:44:47.044 "dhchap_dhgroups": [ 00:44:47.044 "null", 00:44:47.044 "ffdhe2048", 00:44:47.044 "ffdhe3072", 00:44:47.044 "ffdhe4096", 00:44:47.044 "ffdhe6144", 00:44:47.044 "ffdhe8192" 00:44:47.044 ], 00:44:47.044 "rdma_umr_per_io": false 00:44:47.044 } 00:44:47.044 }, 00:44:47.044 { 00:44:47.044 "method": "bdev_nvme_attach_controller", 00:44:47.044 "params": { 00:44:47.044 "name": "nvme0", 00:44:47.044 "trtype": "TCP", 00:44:47.044 "adrfam": "IPv4", 00:44:47.044 "traddr": "127.0.0.1", 00:44:47.044 "trsvcid": "4420", 00:44:47.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:47.044 "prchk_reftag": false, 00:44:47.044 "prchk_guard": false, 00:44:47.044 "ctrlr_loss_timeout_sec": 0, 00:44:47.044 "reconnect_delay_sec": 0, 00:44:47.044 "fast_io_fail_timeout_sec": 0, 00:44:47.044 "psk": "key0", 00:44:47.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:47.044 "hdgst": false, 00:44:47.044 "ddgst": false, 00:44:47.044 "multipath": "multipath" 00:44:47.044 } 00:44:47.044 }, 00:44:47.044 { 00:44:47.044 "method": "bdev_nvme_set_hotplug", 00:44:47.044 "params": { 00:44:47.044 "period_us": 100000, 00:44:47.044 "enable": false 00:44:47.044 } 00:44:47.044 }, 00:44:47.044 { 00:44:47.044 "method": "bdev_wait_for_examine" 00:44:47.044 } 00:44:47.044 ] 00:44:47.044 }, 00:44:47.044 { 00:44:47.044 "subsystem": "nbd", 00:44:47.044 "config": [] 00:44:47.044 } 00:44:47.044 ] 00:44:47.044 }' 00:44:47.044 22:50:36 keyring_file -- keyring/file.sh@115 -- # killprocess 659148 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659148 ']' 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659148 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659148 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659148' 00:44:47.044 killing process with pid 659148 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@973 -- # kill 659148 00:44:47.044 Received shutdown signal, test time was about 1.000000 seconds 00:44:47.044 00:44:47.044 Latency(us) 00:44:47.044 [2024-12-16T21:50:36.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:47.044 [2024-12-16T21:50:36.745Z] =================================================================================================================== 00:44:47.044 [2024-12-16T21:50:36.745Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:44:47.044 22:50:36 keyring_file -- common/autotest_common.sh@978 -- # wait 659148 00:44:47.303 22:50:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=660627 00:44:47.303 22:50:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 660627 /var/tmp/bperf.sock 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 660627 ']' 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:47.303 22:50:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:47.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:47.303 22:50:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:47.303 "subsystems": [ 00:44:47.303 { 00:44:47.303 "subsystem": "keyring", 00:44:47.303 "config": [ 00:44:47.303 { 00:44:47.303 "method": "keyring_file_add_key", 00:44:47.303 "params": { 00:44:47.303 "name": "key0", 00:44:47.303 "path": "/tmp/tmp.wnXc1gKt1K" 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "keyring_file_add_key", 00:44:47.303 "params": { 00:44:47.303 "name": "key1", 00:44:47.303 "path": "/tmp/tmp.4yrxU3Nde4" 00:44:47.303 } 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "iobuf", 00:44:47.303 "config": [ 00:44:47.303 { 00:44:47.303 "method": "iobuf_set_options", 00:44:47.303 "params": { 00:44:47.303 "small_pool_count": 8192, 00:44:47.303 "large_pool_count": 1024, 00:44:47.303 "small_bufsize": 8192, 00:44:47.303 "large_bufsize": 135168, 00:44:47.303 "enable_numa": false 00:44:47.303 } 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "sock", 00:44:47.303 "config": [ 00:44:47.303 { 00:44:47.303 "method": "sock_set_default_impl", 00:44:47.303 "params": { 00:44:47.303 "impl_name": "posix" 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "sock_impl_set_options", 00:44:47.303 "params": { 00:44:47.303 "impl_name": "ssl", 00:44:47.303 "recv_buf_size": 4096, 00:44:47.303 "send_buf_size": 4096, 00:44:47.303 "enable_recv_pipe": true, 00:44:47.303 "enable_quickack": false, 00:44:47.303 "enable_placement_id": 0, 00:44:47.303 "enable_zerocopy_send_server": true, 00:44:47.303 "enable_zerocopy_send_client": false, 00:44:47.303 "zerocopy_threshold": 0, 00:44:47.303 "tls_version": 0, 00:44:47.303 "enable_ktls": false 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "sock_impl_set_options", 00:44:47.303 "params": { 00:44:47.303 "impl_name": "posix", 00:44:47.303 "recv_buf_size": 2097152, 00:44:47.303 "send_buf_size": 2097152, 00:44:47.303 "enable_recv_pipe": true, 00:44:47.303 "enable_quickack": false, 00:44:47.303 "enable_placement_id": 0, 00:44:47.303 "enable_zerocopy_send_server": true, 00:44:47.303 "enable_zerocopy_send_client": false, 00:44:47.303 "zerocopy_threshold": 0, 00:44:47.303 "tls_version": 0, 00:44:47.303 "enable_ktls": false 00:44:47.303 } 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "vmd", 00:44:47.303 "config": [] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "accel", 
00:44:47.303 "config": [ 00:44:47.303 { 00:44:47.303 "method": "accel_set_options", 00:44:47.303 "params": { 00:44:47.303 "small_cache_size": 128, 00:44:47.303 "large_cache_size": 16, 00:44:47.303 "task_count": 2048, 00:44:47.303 "sequence_count": 2048, 00:44:47.303 "buf_count": 2048 00:44:47.303 } 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "bdev", 00:44:47.303 "config": [ 00:44:47.303 { 00:44:47.303 "method": "bdev_set_options", 00:44:47.303 "params": { 00:44:47.303 "bdev_io_pool_size": 65535, 00:44:47.303 "bdev_io_cache_size": 256, 00:44:47.303 "bdev_auto_examine": true, 00:44:47.303 "iobuf_small_cache_size": 128, 00:44:47.303 "iobuf_large_cache_size": 16 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "bdev_raid_set_options", 00:44:47.303 "params": { 00:44:47.303 "process_window_size_kb": 1024, 00:44:47.303 "process_max_bandwidth_mb_sec": 0 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "bdev_iscsi_set_options", 00:44:47.303 "params": { 00:44:47.303 "timeout_sec": 30 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "bdev_nvme_set_options", 00:44:47.303 "params": { 00:44:47.303 "action_on_timeout": "none", 00:44:47.303 "timeout_us": 0, 00:44:47.303 "timeout_admin_us": 0, 00:44:47.303 "keep_alive_timeout_ms": 10000, 00:44:47.303 "arbitration_burst": 0, 00:44:47.303 "low_priority_weight": 0, 00:44:47.303 "medium_priority_weight": 0, 00:44:47.303 "high_priority_weight": 0, 00:44:47.303 "nvme_adminq_poll_period_us": 10000, 00:44:47.303 "nvme_ioq_poll_period_us": 0, 00:44:47.303 "io_queue_requests": 512, 00:44:47.303 "delay_cmd_submit": true, 00:44:47.303 "transport_retry_count": 4, 00:44:47.303 "bdev_retry_count": 3, 00:44:47.303 "transport_ack_timeout": 0, 00:44:47.303 "ctrlr_loss_timeout_sec": 0, 00:44:47.303 "reconnect_delay_sec": 0, 00:44:47.303 "fast_io_fail_timeout_sec": 0, 00:44:47.303 "disable_auto_failback": false, 00:44:47.303 "generate_uuids": false, 00:44:47.303 "transport_tos": 0, 00:44:47.303 "nvme_error_stat": false, 00:44:47.303 "rdma_srq_size": 0, 00:44:47.303 "io_path_stat": false, 00:44:47.303 "allow_accel_sequence": false, 00:44:47.303 "rdma_max_cq_size": 0, 00:44:47.303 "rdma_cm_event_timeout_ms": 0, 00:44:47.303 "dhchap_digests": [ 00:44:47.303 "sha256", 00:44:47.303 "sha384", 00:44:47.303 "sha512" 00:44:47.303 ], 00:44:47.303 "dhchap_dhgroups": [ 00:44:47.303 "null", 00:44:47.303 "ffdhe2048", 00:44:47.303 "ffdhe3072", 00:44:47.303 "ffdhe4096", 00:44:47.303 "ffdhe6144", 00:44:47.303 "ffdhe8192" 00:44:47.303 ], 00:44:47.303 "rdma_umr_per_io": false 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "bdev_nvme_attach_controller", 00:44:47.303 "params": { 00:44:47.303 "name": "nvme0", 00:44:47.303 "trtype": "TCP", 00:44:47.303 "adrfam": "IPv4", 00:44:47.303 "traddr": "127.0.0.1", 00:44:47.303 "trsvcid": "4420", 00:44:47.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:47.303 "prchk_reftag": false, 00:44:47.303 "prchk_guard": false, 00:44:47.303 "ctrlr_loss_timeout_sec": 0, 00:44:47.303 "reconnect_delay_sec": 0, 00:44:47.303 "fast_io_fail_timeout_sec": 0, 00:44:47.303 "psk": "key0", 00:44:47.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:47.303 "hdgst": false, 00:44:47.303 "ddgst": false, 00:44:47.303 "multipath": "multipath" 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "method": "bdev_nvme_set_hotplug", 00:44:47.303 "params": { 00:44:47.303 "period_us": 100000, 00:44:47.303 "enable": false 00:44:47.303 } 00:44:47.303 }, 00:44:47.303 
{ 00:44:47.303 "method": "bdev_wait_for_examine" 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }, 00:44:47.303 { 00:44:47.303 "subsystem": "nbd", 00:44:47.303 "config": [] 00:44:47.303 } 00:44:47.303 ] 00:44:47.303 }' 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.303 22:50:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:47.304 [2024-12-16 22:50:36.799315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:47.304 [2024-12-16 22:50:36.799366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660627 ] 00:44:47.304 [2024-12-16 22:50:36.873000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.304 [2024-12-16 22:50:36.894537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:47.561 [2024-12-16 22:50:37.050516] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:48.125 22:50:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.125 22:50:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:48.125 22:50:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:48.125 22:50:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:48.125 22:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.383 22:50:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:48.383 22:50:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:48.383 22:50:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:48.383 22:50:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:48.383 22:50:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:48.383 22:50:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:48.383 22:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.383 22:50:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:48.383 22:50:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:48.383 22:50:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:48.383 22:50:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:48.383 22:50:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:48.383 22:50:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:48.383 22:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.640 22:50:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:48.640 22:50:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:48.640 22:50:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:48.640 22:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:48.898 22:50:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:48.898 22:50:38 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:44:48.898 22:50:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.wnXc1gKt1K /tmp/tmp.4yrxU3Nde4 00:44:48.898 22:50:38 keyring_file -- keyring/file.sh@20 -- # killprocess 660627 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 660627 ']' 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 660627 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 660627 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 660627' 00:44:48.898 killing process with pid 660627 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@973 -- # kill 660627 00:44:48.898 Received shutdown signal, test time was about 1.000000 seconds 00:44:48.898 00:44:48.898 Latency(us) 00:44:48.898 [2024-12-16T21:50:38.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:48.898 [2024-12-16T21:50:38.599Z] =================================================================================================================== 00:44:48.898 [2024-12-16T21:50:38.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:48.898 22:50:38 keyring_file -- common/autotest_common.sh@978 -- # wait 660627 00:44:49.157 22:50:38 keyring_file -- keyring/file.sh@21 -- # killprocess 659143 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659143 ']' 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659143 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659143 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659143' 00:44:49.157 killing process with pid 659143 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@973 -- # kill 659143 00:44:49.157 22:50:38 keyring_file -- common/autotest_common.sh@978 -- # wait 659143 00:44:49.416 00:44:49.416 real 0m11.703s 00:44:49.416 user 0m29.110s 00:44:49.416 sys 0m2.704s 00:44:49.416 22:50:38 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:49.416 22:50:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:49.416 ************************************ 00:44:49.416 END TEST keyring_file 00:44:49.416 ************************************ 00:44:49.416 22:50:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:49.416 22:50:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:49.416 22:50:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:49.416 22:50:39 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:44:49.416 22:50:39 -- common/autotest_common.sh@10 -- # set +x 00:44:49.416 ************************************ 00:44:49.416 START TEST keyring_linux 00:44:49.416 ************************************ 00:44:49.416 22:50:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:49.416 Joined session keyring: 893185035 00:44:49.676 * Looking for test storage... 00:44:49.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:49.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:49.676 --rc genhtml_branch_coverage=1 00:44:49.676 --rc genhtml_function_coverage=1 00:44:49.676 --rc genhtml_legend=1 00:44:49.676 --rc geninfo_all_blocks=1 00:44:49.676 --rc geninfo_unexecuted_blocks=1 00:44:49.676 00:44:49.676 ' 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:49.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:49.676 --rc genhtml_branch_coverage=1 00:44:49.676 --rc genhtml_function_coverage=1 00:44:49.676 --rc genhtml_legend=1 00:44:49.676 --rc geninfo_all_blocks=1 00:44:49.676 --rc geninfo_unexecuted_blocks=1 00:44:49.676 00:44:49.676 ' 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:49.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:49.676 --rc genhtml_branch_coverage=1 00:44:49.676 --rc genhtml_function_coverage=1 00:44:49.676 --rc genhtml_legend=1 00:44:49.676 --rc geninfo_all_blocks=1 00:44:49.676 --rc geninfo_unexecuted_blocks=1 00:44:49.676 00:44:49.676 ' 00:44:49.676 22:50:39 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:49.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:49.676 --rc genhtml_branch_coverage=1 00:44:49.676 --rc genhtml_function_coverage=1 00:44:49.676 --rc genhtml_legend=1 00:44:49.676 --rc geninfo_all_blocks=1 00:44:49.676 --rc geninfo_unexecuted_blocks=1 00:44:49.676 00:44:49.676 ' 00:44:49.676 22:50:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:49.676 22:50:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:49.676 22:50:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:49.676 22:50:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:49.676 22:50:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:49.676 22:50:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:49.676 22:50:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:49.676 22:50:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
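
The prep_key steps below build the NVMe TLS interchange PSK strings (NVMeTLSkey-1:00:<base64>:) that later get loaded into the kernel keyring. A minimal sketch of that formatting, assuming — as the strings in this run suggest — that format_key appends a little-endian CRC-32 of the ASCII key characters before base64-encoding, and that digest 0 maps to the "00" hash suffix:

```bash
# Hedged sketch of format_interchange_psk as exercised below; the CRC-32 and
# "00" suffix are assumptions inferred from the PSK strings in this log.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # ASCII hex characters, not raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC-32, appended little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
# prep_key then writes the string to its path and restricts permissions:
echo "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0
```
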
00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:49.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:49.676 22:50:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:49.676 22:50:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:49.676 22:50:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:49.676 22:50:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:49.677 /tmp/:spdk-test:key0 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:49.677 
22:50:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:49.677 22:50:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:49.677 22:50:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:49.677 /tmp/:spdk-test:key1 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=661167 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:49.677 22:50:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 661167 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661167 ']' 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:49.677 22:50:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:49.936 [2024-12-16 22:50:39.396465] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
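
The spdk_tgt launch above (tgtpid captured, then waitforlisten on /var/tmp/spdk.sock) reduces to a start-and-poll pattern. A hedged sketch — the polling loop is illustrative, not the real waitforlisten implementation; rpc_get_methods is a standard SPDK RPC:

```bash
# Start the target in the background and wait for its RPC socket to answer.
build/bin/spdk_tgt &
tgtpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
	sleep 0.5
	# Bail out early if the target died before it ever listened.
	kill -0 "$tgtpid" 2> /dev/null || { echo "spdk_tgt exited" >&2; exit 1; }
done
```
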
00:44:49.936 [2024-12-16 22:50:39.396514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661167 ] 00:44:49.936 [2024-12-16 22:50:39.469308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.936 [2024-12-16 22:50:39.491923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.193 22:50:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.193 22:50:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:50.193 22:50:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:50.193 22:50:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.193 22:50:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:50.193 [2024-12-16 22:50:39.679992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:50.193 null0 00:44:50.194 [2024-12-16 22:50:39.712033] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:50.194 [2024-12-16 22:50:39.712330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.194 22:50:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:50.194 547955922 00:44:50.194 22:50:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:50.194 644947580 00:44:50.194 22:50:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=661181 00:44:50.194 22:50:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 661181 /var/tmp/bperf.sock 00:44:50.194 22:50:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661181 ']' 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:50.194 22:50:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:50.194 [2024-12-16 22:50:39.784207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
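
The two keyctl add calls above store the formatted PSKs as "user" keys on the session keyring (@s) and print the serial numbers (547955922 and 644947580) that the test later resolves by name. A short recap of the same calls, using key0's payload from this run:

```bash
# Add the PSK to the session keyring; keyctl prints the new key's serial number.
sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to $sn (547955922 here)
keyctl print "$sn"                      # dumps the PSK payload for verification
keyctl unlink "$sn"                     # what cleanup does once the test is over
```
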
00:44:50.194 [2024-12-16 22:50:39.784247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661181 ] 00:44:50.194 [2024-12-16 22:50:39.854659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.194 [2024-12-16 22:50:39.876768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:50.451 22:50:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.451 22:50:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:50.451 22:50:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:50.451 22:50:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:50.451 22:50:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:50.451 22:50:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:50.708 22:50:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:50.708 22:50:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:50.966 [2024-12-16 22:50:40.532010] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:50.966 nvme0n1 00:44:50.966 22:50:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:50.966 22:50:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:50.966 22:50:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:50.966 22:50:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:50.966 22:50:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:50.966 22:50:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:51.223 22:50:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:51.223 22:50:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:51.223 22:50:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:51.223 22:50:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:51.223 22:50:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:51.223 22:50:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:51.223 22:50:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:51.481 22:50:41 keyring_linux -- keyring/linux.sh@25 -- # sn=547955922 00:44:51.482 22:50:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:51.482 22:50:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:51.482 22:50:41 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 547955922 == \5\4\7\9\5\5\9\2\2 ]] 00:44:51.482 22:50:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 547955922 00:44:51.482 22:50:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:51.482 22:50:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:51.482 Running I/O for 1 seconds... 00:44:52.416 21782.00 IOPS, 85.09 MiB/s 00:44:52.416 Latency(us) 00:44:52.416 [2024-12-16T21:50:42.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:52.416 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:52.416 nvme0n1 : 1.01 21783.11 85.09 0.00 0.00 5856.93 2512.21 7645.87 00:44:52.416 [2024-12-16T21:50:42.117Z] =================================================================================================================== 00:44:52.416 [2024-12-16T21:50:42.117Z] Total : 21783.11 85.09 0.00 0.00 5856.93 2512.21 7645.87 00:44:52.416 { 00:44:52.416 "results": [ 00:44:52.416 { 00:44:52.416 "job": "nvme0n1", 00:44:52.416 "core_mask": "0x2", 00:44:52.416 "workload": "randread", 00:44:52.416 "status": "finished", 00:44:52.416 "queue_depth": 128, 00:44:52.416 "io_size": 4096, 00:44:52.416 "runtime": 1.005871, 00:44:52.416 "iops": 21783.111353245098, 00:44:52.416 "mibps": 85.09027872361366, 00:44:52.416 "io_failed": 0, 00:44:52.416 "io_timeout": 0, 00:44:52.416 "avg_latency_us": 5856.932740284832, 00:44:52.416 "min_latency_us": 2512.213333333333, 00:44:52.416 "max_latency_us": 7645.866666666667 00:44:52.416 } 00:44:52.416 ], 00:44:52.416 "core_count": 1 00:44:52.416 } 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:52.673 22:50:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:52.673 22:50:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:52.673 22:50:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:52.931 22:50:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:52.931 22:50:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:52.931 22:50:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:52.931 22:50:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:52.931 22:50:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:52.931 22:50:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:53.189 [2024-12-16 22:50:42.720773] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:53.189 [2024-12-16 22:50:42.721706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff700 (107): Transport endpoint is not connected 00:44:53.189 [2024-12-16 22:50:42.722701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff700 (9): Bad file descriptor 00:44:53.189 [2024-12-16 22:50:42.723702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:53.189 [2024-12-16 22:50:42.723711] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:53.189 [2024-12-16 22:50:42.723717] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:53.189 [2024-12-16 22:50:42.723725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
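
The attach attempt above is the negative path: :spdk-test:key1 is on the session keyring but the connection is expected to fail (the target side was prepared for key0), so the harness wraps the call in NOT, which inverts the exit status. A simplified sketch of that wrapper — the real helper in autotest_common.sh also type-checks its argument via valid_exec_arg and inspects the exit code more carefully:

```bash
# Simplified NOT: succeed only when the wrapped command fails.
NOT() {
	if "$@"; then
		return 1  # command unexpectedly succeeded
	fi
	return 0      # expected failure
}

# Flags below are verbatim from the failing attach in this log.
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
	-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
	-n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
	--psk :spdk-test:key1
```
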
00:44:53.189 request: 00:44:53.189 { 00:44:53.189 "name": "nvme0", 00:44:53.189 "trtype": "tcp", 00:44:53.189 "traddr": "127.0.0.1", 00:44:53.189 "adrfam": "ipv4", 00:44:53.189 "trsvcid": "4420", 00:44:53.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:53.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:53.189 "prchk_reftag": false, 00:44:53.189 "prchk_guard": false, 00:44:53.189 "hdgst": false, 00:44:53.189 "ddgst": false, 00:44:53.189 "psk": ":spdk-test:key1", 00:44:53.189 "allow_unrecognized_csi": false, 00:44:53.189 "method": "bdev_nvme_attach_controller", 00:44:53.189 "req_id": 1 00:44:53.189 } 00:44:53.189 Got JSON-RPC error response 00:44:53.189 response: 00:44:53.189 { 00:44:53.189 "code": -5, 00:44:53.189 "message": "Input/output error" 00:44:53.189 } 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@33 -- # sn=547955922 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 547955922 00:44:53.189 1 links removed 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@33 -- # sn=644947580 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 644947580 00:44:53.189 1 links removed 00:44:53.189 22:50:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 661181 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661181 ']' 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661181 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661181 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661181' 00:44:53.189 killing process with pid 661181 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 661181 00:44:53.189 Received shutdown signal, test time was about 1.000000 seconds 00:44:53.189 00:44:53.189 
Latency(us) 00:44:53.189 [2024-12-16T21:50:42.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:53.189 [2024-12-16T21:50:42.890Z] =================================================================================================================== 00:44:53.189 [2024-12-16T21:50:42.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:53.189 22:50:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 661181 00:44:53.448 22:50:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 661167 00:44:53.448 22:50:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661167 ']' 00:44:53.448 22:50:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661167 00:44:53.448 22:50:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:53.448 22:50:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:53.448 22:50:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661167 00:44:53.448 22:50:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:53.448 22:50:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:53.448 22:50:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661167' 00:44:53.448 killing process with pid 661167 00:44:53.448 22:50:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 661167 00:44:53.448 22:50:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 661167 00:44:53.707 00:44:53.707 real 0m4.254s 00:44:53.707 user 0m8.089s 00:44:53.707 sys 0m1.416s 00:44:53.707 22:50:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:53.707 22:50:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:53.707 ************************************ 00:44:53.707 END TEST keyring_linux 00:44:53.707 ************************************ 00:44:53.707 22:50:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:53.707 22:50:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:53.707 22:50:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:53.707 22:50:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:53.707 22:50:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:53.707 22:50:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:53.707 22:50:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:53.707 22:50:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:53.707 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:44:53.707 22:50:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:53.707 22:50:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:53.707 22:50:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:53.707 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:44:58.976 INFO: APP EXITING 00:44:58.976 INFO: 
killing all VMs 00:44:59.235 INFO: killing vhost app 00:44:59.235 INFO: EXIT DONE 00:45:01.768 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:45:01.768 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:45:01.768 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:45:01.769 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:45:02.027 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:45:02.028 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:45:02.028 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:45:05.315 Cleaning 00:45:05.315 Removing: /var/run/dpdk/spdk0/config 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:05.315 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:05.315 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:05.315 Removing: /var/run/dpdk/spdk1/config 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:05.315 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:05.315 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:05.315 Removing: /var/run/dpdk/spdk2/config 00:45:05.315 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:05.315 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:05.316 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:05.316 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:05.316 Removing: /var/run/dpdk/spdk3/config 00:45:05.316 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:05.316 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:05.316 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:05.316 Removing: /var/run/dpdk/spdk4/config 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:05.316 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:05.316 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:05.316 Removing: /dev/shm/bdev_svc_trace.1 00:45:05.316 Removing: /dev/shm/nvmf_trace.0 00:45:05.316 Removing: /dev/shm/spdk_tgt_trace.pid104265 00:45:05.316 Removing: /var/run/dpdk/spdk0 00:45:05.316 Removing: /var/run/dpdk/spdk1 00:45:05.316 Removing: /var/run/dpdk/spdk2 00:45:05.316 Removing: /var/run/dpdk/spdk3 00:45:05.316 Removing: /var/run/dpdk/spdk4 00:45:05.316 Removing: /var/run/dpdk/spdk_pid102179 00:45:05.316 Removing: /var/run/dpdk/spdk_pid103211 00:45:05.316 Removing: /var/run/dpdk/spdk_pid104265 00:45:05.316 Removing: /var/run/dpdk/spdk_pid104888 00:45:05.316 Removing: /var/run/dpdk/spdk_pid105810 00:45:05.316 Removing: /var/run/dpdk/spdk_pid105904 00:45:05.316 Removing: /var/run/dpdk/spdk_pid106951 00:45:05.316 Removing: /var/run/dpdk/spdk_pid106999 00:45:05.316 Removing: /var/run/dpdk/spdk_pid107345 00:45:05.316 Removing: /var/run/dpdk/spdk_pid108827 00:45:05.316 Removing: /var/run/dpdk/spdk_pid110125 00:45:05.316 Removing: /var/run/dpdk/spdk_pid110566 00:45:05.316 Removing: /var/run/dpdk/spdk_pid110737 00:45:05.316 Removing: /var/run/dpdk/spdk_pid110941 00:45:05.316 Removing: /var/run/dpdk/spdk_pid111225 00:45:05.316 Removing: /var/run/dpdk/spdk_pid111476 00:45:05.316 Removing: /var/run/dpdk/spdk_pid111720 00:45:05.316 Removing: /var/run/dpdk/spdk_pid112000 00:45:05.316 Removing: /var/run/dpdk/spdk_pid112733 00:45:05.316 Removing: /var/run/dpdk/spdk_pid115792 00:45:05.316 Removing: /var/run/dpdk/spdk_pid116043 00:45:05.316 Removing: /var/run/dpdk/spdk_pid116291 00:45:05.316 Removing: /var/run/dpdk/spdk_pid116300 00:45:05.316 Removing: /var/run/dpdk/spdk_pid117160 00:45:05.316 Removing: /var/run/dpdk/spdk_pid117179 00:45:05.316 Removing: /var/run/dpdk/spdk_pid117653 00:45:05.316 Removing: /var/run/dpdk/spdk_pid117660 00:45:05.316 Removing: /var/run/dpdk/spdk_pid117916 00:45:05.316 Removing: /var/run/dpdk/spdk_pid118065 00:45:05.316 Removing: /var/run/dpdk/spdk_pid118181 00:45:05.316 Removing: /var/run/dpdk/spdk_pid118387 00:45:05.316 Removing: /var/run/dpdk/spdk_pid118745 00:45:05.316 Removing: /var/run/dpdk/spdk_pid118986 00:45:05.316 Removing: /var/run/dpdk/spdk_pid119274 00:45:05.316 Removing: /var/run/dpdk/spdk_pid123124 00:45:05.316 
Removing: /var/run/dpdk/spdk_pid127314 00:45:05.316 Removing: /var/run/dpdk/spdk_pid137376 00:45:05.316 Removing: /var/run/dpdk/spdk_pid137961 00:45:05.316 Removing: /var/run/dpdk/spdk_pid142250 00:45:05.316 Removing: /var/run/dpdk/spdk_pid142488 00:45:05.316 Removing: /var/run/dpdk/spdk_pid146684 00:45:05.316 Removing: /var/run/dpdk/spdk_pid152446 00:45:05.316 Removing: /var/run/dpdk/spdk_pid155186 00:45:05.316 Removing: /var/run/dpdk/spdk_pid165710 00:45:05.316 Removing: /var/run/dpdk/spdk_pid174675 00:45:05.316 Removing: /var/run/dpdk/spdk_pid176459 00:45:05.316 Removing: /var/run/dpdk/spdk_pid177363 00:45:05.316 Removing: /var/run/dpdk/spdk_pid194143 00:45:05.316 Removing: /var/run/dpdk/spdk_pid197949 00:45:05.316 Removing: /var/run/dpdk/spdk_pid279523 00:45:05.316 Removing: /var/run/dpdk/spdk_pid284602 00:45:05.316 Removing: /var/run/dpdk/spdk_pid290259 00:45:05.316 Removing: /var/run/dpdk/spdk_pid297367 00:45:05.316 Removing: /var/run/dpdk/spdk_pid297374 00:45:05.316 Removing: /var/run/dpdk/spdk_pid298274 00:45:05.575 Removing: /var/run/dpdk/spdk_pid299027 00:45:05.575 Removing: /var/run/dpdk/spdk_pid299870 00:45:05.575 Removing: /var/run/dpdk/spdk_pid300532 00:45:05.575 Removing: /var/run/dpdk/spdk_pid300534 00:45:05.575 Removing: /var/run/dpdk/spdk_pid300764 00:45:05.575 Removing: /var/run/dpdk/spdk_pid300955 00:45:05.575 Removing: /var/run/dpdk/spdk_pid300989 00:45:05.575 Removing: /var/run/dpdk/spdk_pid301871 00:45:05.575 Removing: /var/run/dpdk/spdk_pid302581 00:45:05.575 Removing: /var/run/dpdk/spdk_pid303447 00:45:05.575 Removing: /var/run/dpdk/spdk_pid304112 00:45:05.575 Removing: /var/run/dpdk/spdk_pid304116 00:45:05.575 Removing: /var/run/dpdk/spdk_pid304339 00:45:05.575 Removing: /var/run/dpdk/spdk_pid305348 00:45:05.575 Removing: /var/run/dpdk/spdk_pid306364 00:45:05.575 Removing: /var/run/dpdk/spdk_pid314403 00:45:05.575 Removing: /var/run/dpdk/spdk_pid343279 00:45:05.575 Removing: /var/run/dpdk/spdk_pid347701 00:45:05.575 Removing: /var/run/dpdk/spdk_pid349270 00:45:05.575 Removing: /var/run/dpdk/spdk_pid351052 00:45:05.575 Removing: /var/run/dpdk/spdk_pid351281 00:45:05.575 Removing: /var/run/dpdk/spdk_pid351295 00:45:05.575 Removing: /var/run/dpdk/spdk_pid351520 00:45:05.575 Removing: /var/run/dpdk/spdk_pid352012 00:45:05.575 Removing: /var/run/dpdk/spdk_pid353797 00:45:05.575 Removing: /var/run/dpdk/spdk_pid354541 00:45:05.575 Removing: /var/run/dpdk/spdk_pid355029 00:45:05.575 Removing: /var/run/dpdk/spdk_pid357067 00:45:05.575 Removing: /var/run/dpdk/spdk_pid357549 00:45:05.575 Removing: /var/run/dpdk/spdk_pid358038 00:45:05.575 Removing: /var/run/dpdk/spdk_pid362326 00:45:05.575 Removing: /var/run/dpdk/spdk_pid367999 00:45:05.575 Removing: /var/run/dpdk/spdk_pid368000 00:45:05.575 Removing: /var/run/dpdk/spdk_pid368001 00:45:05.575 Removing: /var/run/dpdk/spdk_pid371716 00:45:05.575 Removing: /var/run/dpdk/spdk_pid375426 00:45:05.575 Removing: /var/run/dpdk/spdk_pid380282 00:45:05.575 Removing: /var/run/dpdk/spdk_pid416003 00:45:05.575 Removing: /var/run/dpdk/spdk_pid419865 00:45:05.575 Removing: /var/run/dpdk/spdk_pid425930 00:45:05.575 Removing: /var/run/dpdk/spdk_pid427198 00:45:05.575 Removing: /var/run/dpdk/spdk_pid428496 00:45:05.575 Removing: /var/run/dpdk/spdk_pid429784 00:45:05.575 Removing: /var/run/dpdk/spdk_pid434232 00:45:05.575 Removing: /var/run/dpdk/spdk_pid438479 00:45:05.575 Removing: /var/run/dpdk/spdk_pid442390 00:45:05.575 Removing: /var/run/dpdk/spdk_pid450142 00:45:05.575 Removing: /var/run/dpdk/spdk_pid450295 00:45:05.575 Removing: 
/var/run/dpdk/spdk_pid454772 00:45:05.575 Removing: /var/run/dpdk/spdk_pid454995 00:45:05.575 Removing: /var/run/dpdk/spdk_pid455213 00:45:05.575 Removing: /var/run/dpdk/spdk_pid455538 00:45:05.575 Removing: /var/run/dpdk/spdk_pid455662 00:45:05.575 Removing: /var/run/dpdk/spdk_pid457018 00:45:05.575 Removing: /var/run/dpdk/spdk_pid458595 00:45:05.575 Removing: /var/run/dpdk/spdk_pid460154 00:45:05.575 Removing: /var/run/dpdk/spdk_pid461844 00:45:05.575 Removing: /var/run/dpdk/spdk_pid463473 00:45:05.575 Removing: /var/run/dpdk/spdk_pid465037 00:45:05.575 Removing: /var/run/dpdk/spdk_pid470979 00:45:05.575 Removing: /var/run/dpdk/spdk_pid471442 00:45:05.575 Removing: /var/run/dpdk/spdk_pid473236 00:45:05.575 Removing: /var/run/dpdk/spdk_pid474170 00:45:05.575 Removing: /var/run/dpdk/spdk_pid479848 00:45:05.834 Removing: /var/run/dpdk/spdk_pid482334 00:45:05.834 Removing: /var/run/dpdk/spdk_pid488033 00:45:05.834 Removing: /var/run/dpdk/spdk_pid493355 00:45:05.834 Removing: /var/run/dpdk/spdk_pid501861 00:45:05.834 Removing: /var/run/dpdk/spdk_pid508723 00:45:05.834 Removing: /var/run/dpdk/spdk_pid508780 00:45:05.834 Removing: /var/run/dpdk/spdk_pid527417 00:45:05.834 Removing: /var/run/dpdk/spdk_pid527881 00:45:05.834 Removing: /var/run/dpdk/spdk_pid528360 00:45:05.834 Removing: /var/run/dpdk/spdk_pid529027 00:45:05.834 Removing: /var/run/dpdk/spdk_pid529637 00:45:05.834 Removing: /var/run/dpdk/spdk_pid530213 00:45:05.834 Removing: /var/run/dpdk/spdk_pid530685 00:45:05.834 Removing: /var/run/dpdk/spdk_pid531273 00:45:05.834 Removing: /var/run/dpdk/spdk_pid535842 00:45:05.834 Removing: /var/run/dpdk/spdk_pid536074 00:45:05.834 Removing: /var/run/dpdk/spdk_pid542010 00:45:05.834 Removing: /var/run/dpdk/spdk_pid542072 00:45:05.834 Removing: /var/run/dpdk/spdk_pid547431 00:45:05.834 Removing: /var/run/dpdk/spdk_pid551586 00:45:05.834 Removing: /var/run/dpdk/spdk_pid561093 00:45:05.834 Removing: /var/run/dpdk/spdk_pid561551 00:45:05.834 Removing: /var/run/dpdk/spdk_pid565725 00:45:05.834 Removing: /var/run/dpdk/spdk_pid565955 00:45:05.834 Removing: /var/run/dpdk/spdk_pid570031 00:45:05.834 Removing: /var/run/dpdk/spdk_pid575641 00:45:05.834 Removing: /var/run/dpdk/spdk_pid578355 00:45:05.834 Removing: /var/run/dpdk/spdk_pid588235 00:45:05.834 Removing: /var/run/dpdk/spdk_pid596904 00:45:05.834 Removing: /var/run/dpdk/spdk_pid598469 00:45:05.834 Removing: /var/run/dpdk/spdk_pid599365 00:45:05.834 Removing: /var/run/dpdk/spdk_pid615146 00:45:05.834 Removing: /var/run/dpdk/spdk_pid618937 00:45:05.834 Removing: /var/run/dpdk/spdk_pid621557 00:45:05.834 Removing: /var/run/dpdk/spdk_pid629639 00:45:05.834 Removing: /var/run/dpdk/spdk_pid629645 00:45:05.834 Removing: /var/run/dpdk/spdk_pid634591 00:45:05.834 Removing: /var/run/dpdk/spdk_pid636505 00:45:05.834 Removing: /var/run/dpdk/spdk_pid638419 00:45:05.834 Removing: /var/run/dpdk/spdk_pid639440 00:45:05.834 Removing: /var/run/dpdk/spdk_pid641352 00:45:05.834 Removing: /var/run/dpdk/spdk_pid642606 00:45:05.834 Removing: /var/run/dpdk/spdk_pid651159 00:45:05.834 Removing: /var/run/dpdk/spdk_pid651614 00:45:05.834 Removing: /var/run/dpdk/spdk_pid652061 00:45:05.834 Removing: /var/run/dpdk/spdk_pid654341 00:45:05.834 Removing: /var/run/dpdk/spdk_pid654885 00:45:05.834 Removing: /var/run/dpdk/spdk_pid655399 00:45:05.834 Removing: /var/run/dpdk/spdk_pid659143 00:45:05.834 Removing: /var/run/dpdk/spdk_pid659148 00:45:05.834 Removing: /var/run/dpdk/spdk_pid660627 00:45:05.834 Removing: /var/run/dpdk/spdk_pid661167 00:45:05.834 Removing: 
/var/run/dpdk/spdk_pid661181 00:45:05.834 Clean 00:45:06.092 22:50:55 -- common/autotest_common.sh@1453 -- # return 0 00:45:06.092 22:50:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:06.092 22:50:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:06.092 22:50:55 -- common/autotest_common.sh@10 -- # set +x 00:45:06.092 22:50:55 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:06.092 22:50:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:06.092 22:50:55 -- common/autotest_common.sh@10 -- # set +x 00:45:06.092 22:50:55 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:06.092 22:50:55 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:06.092 22:50:55 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:06.092 22:50:55 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:06.092 22:50:55 -- spdk/autotest.sh@398 -- # hostname 00:45:06.092 22:50:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:06.350 geninfo: WARNING: invalid characters removed from testname! 00:45:28.270 22:51:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:29.644 22:51:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:31.545 22:51:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:33.443 22:51:22 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:35.344 22:51:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:37.250 22:51:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:39.149 22:51:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:39.150 22:51:28 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:39.150 22:51:28 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:39.150 22:51:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:39.150 22:51:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:39.150 22:51:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:39.150 + [[ -n 7548 ]] 00:45:39.150 + sudo kill 7548 00:45:39.159 [Pipeline] } 00:45:39.172 [Pipeline] // stage 00:45:39.177 [Pipeline] } 00:45:39.190 [Pipeline] // timeout 00:45:39.195 [Pipeline] } 00:45:39.207 [Pipeline] // catchError 00:45:39.212 [Pipeline] } 00:45:39.225 [Pipeline] // wrap 00:45:39.230 [Pipeline] } 00:45:39.242 [Pipeline] // catchError 00:45:39.250 [Pipeline] stage 00:45:39.252 [Pipeline] { (Epilogue) 00:45:39.263 [Pipeline] catchError 00:45:39.265 [Pipeline] { 00:45:39.276 [Pipeline] echo 00:45:39.278 Cleanup processes 00:45:39.283 [Pipeline] sh 00:45:39.568 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:39.568 673409 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:39.586 [Pipeline] sh 00:45:39.921 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:39.921 ++ grep -v 'sudo pgrep' 00:45:39.921 ++ awk '{print $1}' 00:45:39.921 + sudo kill -9 00:45:39.921 + true 00:45:39.954 [Pipeline] sh 00:45:40.279 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:52.496 [Pipeline] sh 00:45:52.779 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:52.779 Artifacts sizes are good 00:45:52.793 [Pipeline] archiveArtifacts 00:45:52.800 Archiving artifacts 00:45:53.198 [Pipeline] sh 00:45:53.484 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:53.498 [Pipeline] cleanWs 00:45:53.507 [WS-CLEANUP] Deleting project workspace... 00:45:53.507 [WS-CLEANUP] Deferred wipeout is used... 00:45:53.514 [WS-CLEANUP] done 00:45:53.516 [Pipeline] } 00:45:53.533 [Pipeline] // catchError 00:45:53.545 [Pipeline] sh 00:45:53.828 + logger -p user.info -t JENKINS-CI 00:45:53.839 [Pipeline] } 00:45:53.856 [Pipeline] // stage 00:45:53.861 [Pipeline] } 00:45:53.873 [Pipeline] // node 00:45:53.877 [Pipeline] End of Pipeline 00:45:53.916 Finished: SUCCESS
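
For reference, the coverage post-processing near the end of the run condenses to: merge the baseline and test captures, then strip vendored DPDK, system headers, and example/app sources. A hedged sketch using only the flags and patterns visible above, with the long workspace paths shortened:

```bash
# Common lcov invocation with the coverage rc flags used throughout this run.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"
$LCOV -q -a cov_base.info -a cov_test.info -o cov_total.info        # merge captures
$LCOV -q -r cov_total.info '*/dpdk/*' -o cov_total.info             # drop vendored DPDK
$LCOV -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
$LCOV -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info     # examples and apps
$LCOV -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
$LCOV -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
```
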